[linux-yocto] [PATCH 11/23] hugetlb: prep_compound_gigantic_page(): drop __init marker
Yang Shi
yang.shi at windriver.com
Wed Jul 30 19:16:15 PDT 2014
From: Luiz Capitulino <lcapitulino at redhat.com>
commit 2906dd52831b6049e1d4d9b12f6f234bf2f64a03 upstream
The HugeTLB subsystem uses the buddy allocator to allocate hugepages at
runtime. This means that runtime hugepage allocation is limited to the
orders the buddy allocator can serve, i.e. orders below MAX_ORDER. For
archs supporting gigantic pages (that is, page sizes of order MAX_ORDER
or above), this in turn means that those pages can't be allocated at
runtime.
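As a rough illustration of that limitation, the runtime allocation path
boils down to something like the sketch below. huge_page_order(),
alloc_pages_node() and MAX_ORDER are real kernel symbols, but the
wrapper function itself is hypothetical and not the kernel's actual
code:

static struct page *runtime_hugepage_alloc_sketch(struct hstate *h, int nid)
{
	/*
	 * The buddy allocator cannot satisfy requests of order >= MAX_ORDER,
	 * so gigantic page sizes have to be rejected up front.
	 */
	if (huge_page_order(h) >= MAX_ORDER)
		return NULL;

	return alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE | __GFP_COMP,
				huge_page_order(h));
}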
HugeTLB supports gigantic page allocation during boottime, via the boot
allocator. To this end the kernel provides the command-line options
hugepagesz= and hugepages=, which can be used to instruct the kernel to
allocate N gigantic pages during boot.
For example, x86_64 supports 2M and 1G hugepages, but only 2M hugepages
can be allocated and freed at runtime. If one wants to allocate 1G
gigantic pages, this has to be done at boot via the hugepagesz= and
hugepages= command-line options.
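For instance, to have the kernel set aside two 1G gigantic pages during
boot, one would add something like the following to the kernel command
line (the count and size here are just an example):

hugepagesz=1G hugepages=2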
Now, gigantic page allocation at boottime has two serious problems:
1. Boottime allocation is not NUMA aware. On a NUMA machine the kernel
evenly distributes boottime allocated hugepages among nodes.
For example, suppose you have a four-node NUMA machine and want
to allocate four 1G gigantic pages at boottime. The kernel will
allocate one gigantic page per node.
On the other hand, we do have users who want to be able to specify
which NUMA node gigantic pages should be allocated from, so that they
can place virtual machines on a specific NUMA node.
2. Gigantic pages allocated at boottime can't be freed
At this point it's important to observe that regular hugepages allocated
at runtime don't have those problems: the HugeTLB sysfs interface for
runtime allocation is NUMA aware, and runtime-allocated pages can be
freed back to the buddy allocator just fine.
This series adds support for allocating gigantic pages at runtime. It
does so by allocating gigantic pages via CMA instead of the buddy
allocator. Releasing gigantic pages is also handled via CMA. As the
series builds on top of the existing HugeTLB interface, allocating and
releasing gigantic pages works just like it does for regular sized
hugepages. This also means that NUMA support just works.
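Conceptually, the CMA-based path replaces the buddy allocator call with
the allocation of a physically contiguous pfn range. A minimal sketch of
the idea is shown below; alloc_contig_range(), node_start_pfn() and
pfn_to_page() are real kernel interfaces, while the helper name and the
trivial choice of starting pfn are purely illustrative (patch 5/5
contains the real implementation):

static struct page *gigantic_page_alloc_sketch(int nid, unsigned int order)
{
	unsigned long nr_pages = 1UL << order;
	/*
	 * Illustrative starting point only; the real code scans the
	 * node's zones for a suitably aligned, movable pfn range.
	 */
	unsigned long pfn = node_start_pfn(nid);

	if (alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE))
		return NULL;

	return pfn_to_page(pfn);
}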
For example, to allocate two 1G gigantic pages on node 1, one can do:
# echo 2 > \
/sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
And, to release all gigantic pages on the same node:
# echo 0 > \
/sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
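The current count on that node can be read back from the same sysfs
file, for example:
# cat \
/sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages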
Please refer to patch 5/5 for full technical details.
Finally, please note that this series is a follow-up to a previous series
that tried to extend the command-line options to be NUMA aware:
http://marc.info/?l=linux-mm&m=139593335312191&w=2
During the discussion of that series it was agreed that adding runtime
allocation support for gigantic pages was a better solution.
This patch (of 5):
This function is going to be used by non-init code in a future
commit.
Signed-off-by: Luiz Capitulino <lcapitulino at redhat.com>
Reviewed-by: Davidlohr Bueso <davidlohr at hp.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov at linux.intel.com>
Reviewed-by: Zhang Yanfei <zhangyanfei at cn.fujitsu.com>
Cc: Marcelo Tosatti <mtosatti at redhat.com>
Cc: Andrea Arcangeli <aarcange at redhat.com>
Cc: Davidlohr Bueso <davidlohr at hp.com>
Cc: David Rientjes <rientjes at google.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki at jp.fujitsu.com>
Cc: Yinghai Lu <yinghai at kernel.org>
Cc: Rik van Riel <riel at redhat.com>
Cc: Naoya Horiguchi <n-horiguchi at ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi at windriver.com>
---
mm/hugetlb.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 77b27bb2..ee4c057 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -689,8 +689,7 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 	put_page(page); /* free it into the hugepage allocator */
 }
 
-static void __init prep_compound_gigantic_page(struct page *page,
-		unsigned long order)
+static void prep_compound_gigantic_page(struct page *page, unsigned long order)
 {
 	int i;
 	int nr_pages = 1 << order;
--
2.0.2