ARM: b.L: core switcher code
This is the core code implementing big.LITTLE switcher functionality.
Rationale for this code is available here:
http://lwn.net/Articles/481055/
The main entry point for a switch request is:
void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
If the calling CPU is not the wanted one, this wrapper takes care of
sending the request to the appropriate CPU with schedule_work_on().
At the moment the core switch operation is handled by bL_switch_to()
which must be called on the CPU for which a switch is requested.
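As an illustration of that dispatch, the wrapper could be structured
roughly as follows (a simplified sketch, not the exact in-tree code;
the struct switch_args carrier and switch_work_fn() are made-up names,
and bL_switch_to() is assumed visible here):

#include <linux/slab.h>
#include <linux/workqueue.h>

struct switch_args {			/* hypothetical request carrier */
	struct work_struct work;
	unsigned int new_cluster_id;
};

static void switch_work_fn(struct work_struct *work)
{
	struct switch_args *args =
		container_of(work, struct switch_args, work);

	/* This now runs on the requested CPU. */
	bL_switch_to(args->new_cluster_id);
	kfree(args);
}

void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
{
	struct switch_args *args = kmalloc(sizeof(*args), GFP_KERNEL);

	if (!args)
		return;
	args->new_cluster_id = new_cluster_id;
	INIT_WORK(&args->work, switch_work_fn);
	schedule_work_on(cpu, &args->work);	/* queued on the target CPU */
}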
What this code does:
* Return early if the current cluster is the wanted one.
* Close the gate in the kernel entry vector for both the inbound
and outbound CPUs.
* Wake up the inbound CPU so it can perform its reset sequence in
parallel up to the kernel entry vector gate.
* Migrate all interrupts in the GIC targeting the outbound CPU
interface to the inbound CPU interface, including SGIs. This is
performed by gic_migrate_target() in drivers/irqchip/irq-gic.c.
* Call cpu_pm_enter() which takes care of flushing the VFP state to
RAM and saving the CPU interface configuration from the GIC to RAM.
* Modify the cpu_logical_map to refer to the inbound physical CPU.
* Call cpu_suspend() which saves the CPU state (general purpose
registers, page table address) onto the stack, stores the
resulting stack pointer in an array indexed by the updated
cpu_logical_map, then calls the provided shutdown function.
This happens in arch/arm/kernel/sleep.S.
At this point, the provided shutdown function executed by the outbound
CPU ungates the inbound CPU. Therefore the inbound CPU:
* Picks up the saved stack pointer in the array indexed by its MPIDR
in arch/arm/kernel/sleep.S.
* The MMU and caches are re-enabled using the saved state on the
provided stack, just as if this were a resume operation from a
suspended state.
* Then cpu_suspend() returns, although this is on the inbound CPU
rather than the outbound CPU which initially called it.
* The function cpu_pm_exit() is called, whose effect is to restore the
CPU interface state in the GIC using the state previously saved by
the outbound CPU.
* bL_switch_to() exits and normal kernel execution resumes on the
new CPU (the whole sequence is condensed in the sketch below).
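Putting the above steps together, the core of bL_switch_to() can be
pictured like this (a condensed, illustrative outline; gate handling,
error paths and the GIC interface id lookup are elided, and
close_entry_gate(), wake_inbound_cpu() and bL_do_switch_entry() are
stand-ins, not real identifiers):

/* Stand-in helpers: the real gate and wakeup code lives elsewhere. */
static void close_entry_gate(unsigned int cpu);
static void wake_inbound_cpu(unsigned int cpu, unsigned int cluster);
static int bL_do_switch_entry(unsigned long arg);	/* shutdown hook */

static int bL_switch_to(unsigned int new_cluster_id)
{
	unsigned int mpidr = read_cpuid_mpidr();
	unsigned int cpuid = MPIDR_AFFINITY_LEVEL(mpidr, 0);
	unsigned int clusterid = MPIDR_AFFINITY_LEVEL(mpidr, 1);
	unsigned int inbound_gic_cpu_if = cpuid;	/* id lookup elided */

	if (new_cluster_id == clusterid)
		return 0;			/* already on the wanted cluster */

	close_entry_gate(cpuid);		/* gate the kernel entry vector */
	wake_inbound_cpu(cpuid, new_cluster_id); /* inbound runs up to the gate */

	/* Move all GIC interrupts, including SGIs, to the inbound interface. */
	gic_migrate_target(inbound_gic_cpu_if);

	cpu_pm_enter();		/* flush VFP, save GIC CPU interface state */

	/* The logical CPU now refers to the inbound physical CPU. */
	cpu_logical_map(smp_processor_id()) = (new_cluster_id << 8) | cpuid;

	/* Save state and hand over; this call returns on the inbound CPU. */
	cpu_suspend(0, bL_do_switch_entry);

	cpu_pm_exit();		/* restore the GIC CPU interface state */
	return 0;
}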
However, the outbound CPU is potentially still running in parallel while
the inbound CPU is resuming normal kernel execution, hence we need
per-CPU stack isolation to execute bL_do_switch(). After the outbound
CPU has ungated the inbound CPU, it calls mcpm_cpu_power_down()
(see the sketch after this list) to:
* Clean its L1 cache.
* If it is the last CPU still alive in its cluster (last man standing),
it also cleans its L2 cache and disables cache snooping from the other
cluster.
* Power down the CPU (or whole cluster).
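The outbound side of the handover can therefore be summarized as
follows (illustrative sketch; open_inbound_gate() is a stand-in, and
the cache and last-man logic actually lives behind
mcpm_cpu_power_down()):

static void open_inbound_gate(void);	/* stand-in: ungates the inbound CPU */

static void bL_do_switch(void *unused)
{
	open_inbound_gate();

	/*
	 * From here on the inbound CPU runs the kernel in parallel.
	 * mcpm_cpu_power_down() cleans L1, and if this CPU is the last
	 * man in its cluster it also cleans L2 and disables snooping
	 * from the other cluster, then powers the CPU (or cluster) down.
	 */
	mcpm_cpu_power_down();
	/* not reached */
}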
Code called from bL_do_switch() might end up referencing 'current' for
various reasons. However, 'current' is derived from the stack pointer.
With an arbitrary stack, the value returned for 'current', and any
values dereferenced through it, are just random garbage, which may
lead to segmentation faults.
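For context, ARM derives current_thread_info() purely from the stack
pointer; the following is simplified from
arch/arm/include/asm/thread_info.h as it stood at the time:

static inline struct thread_info *current_thread_info(void)
{
	register unsigned long sp asm ("sp");

	/* The stack is THREAD_SIZE aligned; mask sp down to its base. */
	return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
/* 'current' is then current_thread_info()->task. */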
The active page table during the execution of bL_do_switch() is also a
problem. There is no guarantee that the inbound CPU won't destroy the
corresponding task which would free the attached page table while the
outbound CPU is still running and relying on it.
To solve both issues, we borrow some of the task space belonging to
the init/idle task which, by its nature, is lightly used and therefore
is unlikely to clash with our usage. The init task is also never going
away.
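A minimal sketch of the idea (not the exact in-tree sequence;
init_thread_union and THREAD_START_SP are the kernel's own symbols):

#include <linux/sched.h>	/* init_thread_union */
#include <asm/thread_info.h>	/* THREAD_START_SP */

/*
 * Any sp inside this region makes current_thread_info() resolve to
 * the init task, whose task struct and mm are never freed.  The
 * actual sp switch has to be done in assembly just before branching
 * to bL_do_switch().
 */
static void *borrowed_stack_top(void)
{
	return (char *)init_thread_union.stack + THREAD_START_SP;
}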
Right now the logical CPU number is assumed to be equivalent to the
physical CPU number within each cluster. The kernel should also be
booted with only one cluster active. These limitations will be lifted
eventually.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
/*
 * arch/arm/include/asm/bL_switcher.h
 *
 * Created by: Nicolas Pitre, April 2012
 * Copyright: (C) 2012-2013 Linaro Limited
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#ifndef ASM_BL_SWITCHER_H
#define ASM_BL_SWITCHER_H

#include <linux/compiler.h>
#include <linux/types.h>

typedef void (*bL_switch_completion_handler)(void *cookie);

int bL_switch_request_cb(unsigned int cpu, unsigned int new_cluster_id,
			 bL_switch_completion_handler completer,
			 void *completer_cookie);

static inline int bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
{
	return bL_switch_request_cb(cpu, new_cluster_id, NULL, NULL);
}
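For reference, a hypothetical caller wanting to know when the switch
has fully completed could use the _cb variant like this (illustrative
only; my_switch_done() and my_switch_and_wait() are made-up names, and
linux/completion.h provides the completion API):

#include <linux/completion.h>

static void my_switch_done(void *cookie)
{
	complete(cookie);	/* cookie chosen by the caller below */
}

static int my_switch_and_wait(unsigned int cpu, unsigned int new_cluster_id)
{
	DECLARE_COMPLETION_ONSTACK(done);
	int ret;

	ret = bL_switch_request_cb(cpu, new_cluster_id, my_switch_done, &done);
	if (!ret)
		wait_for_completion(&done);	/* completer fires when done */
	return ret;
}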
/*
 * Register here to be notified about runtime enabling/disabling of
 * the switcher.
 *
 * The notifier chain is called with the switcher activation lock held:
 * the switcher will not be enabled or disabled during callbacks.
 * Callbacks must not call bL_switcher_{get,put}_enabled().
 */
#define BL_NOTIFY_PRE_ENABLE	0
#define BL_NOTIFY_POST_ENABLE	1
#define BL_NOTIFY_PRE_DISABLE	2
#define BL_NOTIFY_POST_DISABLE	3

#ifdef CONFIG_BL_SWITCHER

int bL_switcher_register_notifier(struct notifier_block *nb);
int bL_switcher_unregister_notifier(struct notifier_block *nb);

/*
 * Use these functions to temporarily prevent enabling/disabling of
 * the switcher.
 * bL_switcher_get_enabled() returns true if the switcher is currently
 * enabled.  Each call to bL_switcher_get_enabled() must be followed
 * by a call to bL_switcher_put_enabled().  These functions are not
 * recursive.
 */
bool bL_switcher_get_enabled(void);
void bL_switcher_put_enabled(void);

int bL_switcher_trace_trigger(void);
int bL_switcher_get_logical_index(u32 mpidr);

#else
static inline int bL_switcher_register_notifier(struct notifier_block *nb)
{
	return 0;
}

static inline int bL_switcher_unregister_notifier(struct notifier_block *nb)
{
	return 0;
}

static inline bool bL_switcher_get_enabled(void) { return false; }
static inline void bL_switcher_put_enabled(void) { }
static inline int bL_switcher_trace_trigger(void) { return 0; }
static inline int bL_switcher_get_logical_index(u32 mpidr) { return -EUNATCH; }
#endif /* CONFIG_BL_SWITCHER */
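A hypothetical use of the get/put pair, pinning the switcher state
across a critical section (illustrative; my_configure() is a made-up
caller):

static void my_configure(void)
{
	bool enabled = bL_switcher_get_enabled();

	/* Between get and put, the switcher state cannot change. */
	pr_info("switcher is %s\n", enabled ? "enabled" : "disabled");

	bL_switcher_put_enabled();	/* must balance the get above */
}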
#endif /* ASM_BL_SWITCHER_H */
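Finally, a sketch of how a driver might hook the notifier chain
declared above (the callback body and names are hypothetical; the
notifier plumbing comes from linux/notifier.h):

#include <linux/notifier.h>
#include <asm/bL_switcher.h>

static int my_bL_notifier(struct notifier_block *nb,
			  unsigned long action, void *unused)
{
	switch (action) {
	case BL_NOTIFY_PRE_ENABLE:
		/* quiesce per-CPU users before the switcher takes over */
		break;
	case BL_NOTIFY_POST_DISABLE:
		/* all logical CPUs are individually usable again */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block my_bL_nb = {
	.notifier_call = my_bL_notifier,
};

static int __init my_init(void)
{
	return bL_switcher_register_notifier(&my_bL_nb);
}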