mm: hugetlb: fix non-atomic enqueue of huge page

gather_surplus_pages() drops hugetlb_lock before walking surplus_list, so the
pages are enqueued to the hstate free lists without the lock held and can race
with concurrent updates.  If a huge page is enqueued under the protection of
hugetlb_lock, then the operation is atomic and safe, so move the unlock below
the enqueue loop.

Signed-off-by: Hillf Danton <dhillf@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: <stable@vger.kernel.org>		[2.6.37+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit b0365c8d0c
parent 34845636a1
Author: Hillf Danton
AuthorDate: 2011-12-28 15:57:16 -08:00
Committer: Linus Torvalds


--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -901,7 +901,6 @@ static int gather_surplus_pages(struct hstate *h, int delta)
 	h->resv_huge_pages += delta;
 	ret = 0;
 
-	spin_unlock(&hugetlb_lock);
 	/* Free the needed pages to the hugetlb pool */
 	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
 		if ((--needed) < 0)
@@ -915,6 +914,7 @@ static int gather_surplus_pages(struct hstate *h, int delta)
 		VM_BUG_ON(page_count(page));
 		enqueue_huge_page(h, page);
 	}
+	spin_unlock(&hugetlb_lock);
 
 	/* Free unnecessary surplus pages to the buddy allocator */
 free:
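
The point of the message - an enqueue is only atomic if hugetlb_lock is held
across it - can be illustrated outside the kernel.  Below is a minimal
user-space sketch of the pattern the patch restores: hold the lock across the
entire enqueue loop and unlock only afterwards.  All identifiers here
(struct page, free_list, free_lock, enqueue_page, producer) are illustrative
stand-ins, not kernel code; the pthread mutex stands in for hugetlb_lock.

/*
 * User-space sketch only, not kernel code.  A shared free list is filled
 * by two producer threads; every insertion happens with the lock held,
 * and the lock is dropped only after the whole loop, mirroring where the
 * patch moves spin_unlock(&hugetlb_lock).
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct page {			/* stand-in for the kernel's struct page */
	int id;
	struct page *next;
};

static struct page *free_list;	/* shared pool, like an hstate free list */
static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;

/* Caller must hold free_lock, as enqueue_huge_page() requires hugetlb_lock. */
static void enqueue_page(struct page *p)
{
	p->next = free_list;
	free_list = p;
}

static void *producer(void *arg)
{
	int base = *(int *)arg;
	int i;

	/*
	 * Buggy shape (what the patch removes):
	 *	pthread_mutex_unlock(&free_lock);	// dropped too early
	 *	for (...) enqueue_page(...);		// races, loses pages
	 *
	 * Fixed shape: hold the lock across the whole enqueue loop.
	 */
	pthread_mutex_lock(&free_lock);
	for (i = 0; i < 1000; i++) {
		struct page *p = malloc(sizeof(*p));

		p->id = base + i;
		enqueue_page(p);
	}
	pthread_mutex_unlock(&free_lock);	/* unlock only after the loop */
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	int b1 = 0, b2 = 1000, n = 0;
	struct page *p;

	pthread_create(&t1, NULL, producer, &b1);
	pthread_create(&t2, NULL, producer, &b2);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	for (p = free_list; p; p = p->next)
		n++;
	printf("%d pages enqueued\n", n);	/* always 2000 with the fix */
	return 0;
}

Build with cc -pthread.  With the lock held across the loop the final count
is always 2000; in the buggy shape, unlocking before the loop lets the two
producers race on free_list and lose insertions.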