[linux-yocto] [PATCH 16/17] kernel/smp: Allow smp_call_function_single() to be called with a function that doesn't return.
Paul, Charlie
Charlie.Paul at windriver.com
Wed Jul 16 09:44:47 PDT 2014
Bruce,
LSI decided to cancel this patch but wants the rest of the series in.
Can I send you the pull request?
Charlie
-----Original Message-----
From: Bruce Ashfield [mailto:bruce.ashfield at windriver.com]
Sent: Friday, July 11, 2014 9:43 AM
To: Charlie Paul; linux-yocto at yoctoproject.org
Cc: Paul, Charlie
Subject: Re: [PATCH 16/17] kernel/smp: Allow smp_call_function_single() to be called with a function that doesn't return.
On 14-07-09 01:26 PM, Charlie Paul wrote:
> From: John Jacques <john.jacques at lsi.com>
>
> When we do a reset (core, not chip or system), the core that is doing
> the reset has to tell all the other cores to reset. To do this, we call
> smp_call_function_single(). In this case the function passed to
> smp_call_function_single() doesn't return.
> The "wait" parameter (fourth argument, described as '@wait: If true,
> wait until function has completed on other CPUs.') doesn't work as
> described without this patch.
>
> Without the patch, smp_call_function_single() doesn't return when the
> function it is given resets the called core. The comments indicate that
> it should, but it does not.
>
> Signed-off-by: John Jacques <john.jacques at lsi.com>
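For context, a minimal sketch of the calling pattern the commit message describes, assuming a hypothetical handler name and body (the actual LSI reset code is not part of this patch): the handler never returns, so the cross-call has to be issued with wait = 0 and the generic code has to release the CSD before invoking it.

#include <linux/smp.h>
#include <linux/cpumask.h>

/* Hypothetical handler: puts the local core back into its reset/boot
 * state and never returns. */
static void example_reset_this_cpu(void *unused)
{
	/* architecture-specific core reset would go here ... */
	for (;;)
		;
}

/* Ask every other online CPU to reset itself (sketch only; a real
 * caller would pin itself to one CPU and disable preemption around
 * the loop). */
static void example_reset_other_cpus(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		if (cpu == smp_processor_id())
			continue;
		/* wait = 0: the handler will not return, so never wait on it */
		smp_call_function_single(cpu, example_reset_this_cpu, NULL, 0);
	}
}

With the patch, CSD_FLAG_NOWAIT makes generic_smp_call_function_single_interrupt() unlock the CSD before calling the handler, so the caller's per-CPU csd_data is not left locked by a handler that never returns.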
> ---
> kernel/smp.c | 24 +++++++++++++++++++-----
> 1 file changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/smp.c b/kernel/smp.c
> index d5f3238..040b2b1 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -17,14 +17,14 @@
> static struct {
> struct list_head queue;
> raw_spinlock_t lock;
> -} call_function __cacheline_aligned_in_smp =
> - {
> +} call_function __cacheline_aligned_in_smp = {
> .queue = LIST_HEAD_INIT(call_function.queue),
> .lock = __RAW_SPIN_LOCK_UNLOCKED(call_function.lock),
> };
>
> enum {
> CSD_FLAG_LOCK = 0x01,
> + CSD_FLAG_NOWAIT = 0x02,
> };
>
> struct call_function_data {
> @@ -268,6 +268,8 @@ void generic_smp_call_function_single_interrupt(void)
>
> while (!list_empty(&list)) {
> struct call_single_data *data;
> + void (*func)(void *);
> + void *info;
>
> data = list_entry(list.next, struct call_single_data, list);
> list_del(&data->list);
> @@ -278,12 +280,21 @@ void generic_smp_call_function_single_interrupt(void)
> * so save them away before making the call:
> */
> data_flags = data->flags;
> -
> - data->func(data->info);
> + func = data->func;
> + info = data->info;
>
> /*
> + * Unlock before calling func so that func never has
> + * to return.
> + *
> * Unlocked CSDs are valid through generic_exec_single():
> */
> + if ((data_flags & CSD_FLAG_LOCK) &&
> + (data_flags & CSD_FLAG_NOWAIT))
> + csd_unlock(data);
> +
> + func(info);
> +
> if (data_flags & CSD_FLAG_LOCK)
> csd_unlock(data);
> }
> @@ -337,6 +348,9 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
>
> csd_lock(data);
>
> + if (!wait)
> + data->flags |= CSD_FLAG_NOWAIT;
> +
> data->func = func;
> data->info = info;
> generic_exec_single(cpu, data, wait);
>
> @@ -672,7 +686,7 @@
> EXPORT_SYMBOL(nr_cpu_ids);
> /* An arch may set nr_cpu_ids earlier if needed, so this would be redundant */
> void __init setup_nr_cpu_ids(void)
> {
> - nr_cpu_ids = find_last_bit(cpumask_bits(cpu_possible_mask),NR_CPUS) + 1;
> + nr_cpu_ids = find_last_bit(cpumask_bits(cpu_possible_mask), NR_CPUS) + 1;
I had some structural comments about this patch that haven't been addressed.
The rest of the series looks OK, though.
Bruce
> }
>
> /* Called by boot processor to activate the rest. */
>
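For readers following the change: the CSD has to be unlocked before func() runs because the lock bit is what a later caller spins on. A condensed, paraphrased view of the relevant helper from this kernel's kernel/smp.c (abbreviated, not quoted verbatim):

/* Paraphrased from kernel/smp.c of this kernel version; abbreviated. */
static void csd_lock_wait(struct call_single_data *data)
{
	while (data->flags & CSD_FLAG_LOCK)
		cpu_relax();	/* spin until the IPI handler unlocks the CSD */
}

/*
 * Without the patch the IPI handler does:
 *
 *	data->func(data->info);   <- never returns when func resets the core
 *	csd_unlock(data);         <- so this is never reached
 *
 * The caller's per-CPU csd_data then stays locked forever, and the next
 * smp_call_function_single() issued from that CPU spins in csd_lock().
 * With CSD_FLAG_NOWAIT set (wait == 0), the handler unlocks the CSD
 * first and only then calls func(), so func() is free not to return.
 */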