[linux-yocto] [PATCH 20/28] sched/numa: Retry placement more frequently when misplaced
Yang Shi
yang.shi at windriver.com
Mon Aug 18 13:45:30 PDT 2014
From: Rik van Riel <riel at redhat.com>

commit 5085e2a328849bdee6650b32d52c87c3788ab01c upstream

When tasks have not converged on their preferred nodes yet, we want
to retry fairly often, to make sure we do not migrate a task's memory
to an undesirable location, only to have to move it again later.

This patch reduces the interval at which migration is retried
when the task's numa_scan_period is small.
Signed-off-by: Rik van Riel <riel at redhat.com>
Tested-by: Vinod Chegu <chegu_vinod at hp.com>
Acked-by: Mel Gorman <mgorman at suse.de>
Signed-off-by: Peter Zijlstra <peterz at infradead.org>
Cc: Linus Torvalds <torvalds at linux-foundation.org>
Link: http://lkml.kernel.org/r/1397235629-16328-3-git-send-email-riel@redhat.com
Signed-off-by: Ingo Molnar <mingo at kernel.org>
Signed-off-by: Yang Shi <yang.shi at windriver.com>
---
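
For context, here is a minimal userspace sketch (not kernel code) of the
retry-interval computation this patch introduces. HZ=1000 and the
simplified msecs_to_jiffies()/min_ul() helpers below are assumptions made
only for this illustration; the kernel uses its own definitions.

/*
 * Minimal userspace sketch of the new retry-interval calculation:
 * retry after numa_scan_period/16, capped at the old fixed value of HZ.
 */
#include <stdio.h>

#define HZ 1000UL	/* assumed tick rate: 1000 jiffies per second */

static unsigned long msecs_to_jiffies(unsigned long ms)
{
	return ms * HZ / 1000;	/* exact whenever HZ divides 1000 */
}

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* sample numa_scan_period values, in milliseconds */
	unsigned long periods[] = { 1000, 5000, 20000, 60000 };
	unsigned long i;

	for (i = 0; i < sizeof(periods) / sizeof(periods[0]); i++) {
		unsigned long interval = HZ;	/* old behaviour: always retry after one second */

		/* new behaviour: retry after numa_scan_period/16, capped at HZ */
		interval = min_ul(interval, msecs_to_jiffies(periods[i]) / 16);

		printf("scan_period=%5lu ms -> retry after %4lu jiffies (%lu ms)\n",
		       periods[i], interval, interval * 1000 / HZ);
	}
	return 0;
}

With a numa_scan_period of 1000 ms the retry fires after roughly 62
jiffies (62 ms at the assumed HZ=1000), while scan periods of 16 seconds
or more stay at the old one-second cap.
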
 kernel/sched/fair.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 813cd8e..1eda55e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1343,12 +1343,15 @@ static int task_numa_migrate(struct task_struct *p)
 /* Attempt to migrate a task to a CPU on the preferred node. */
 static void numa_migrate_preferred(struct task_struct *p)
 {
+	unsigned long interval = HZ;
+
 	/* This task has no NUMA fault statistics yet */
 	if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults_memory))
 		return;
 
 	/* Periodically retry migrating the task to the preferred node */
-	p->numa_migrate_retry = jiffies + HZ;
+	interval = min(interval, msecs_to_jiffies(p->numa_scan_period) / 16);
+	p->numa_migrate_retry = jiffies + interval;
 
 	/* Success if task is already running on preferred CPU */
 	if (task_node(p) == p->numa_preferred_nid)
--
2.0.2