Make most places that use spu_acquire/spu_acquire_saved interruptible;
this allows getting out of the spufs code when e.g. pressing ctrl+c.
There are a few places where we get called e.g. from spufs teardown
routines where we can't simply err out, so these are left with a comment.
For now I've also not touched the poll routines because it's an open
question what libspe would expect in terms of interrupted system calls.
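As a rough illustration (not the exact patch), callers of spu_acquire()
now need to check its return value:
	ret = spu_acquire(ctx);
	if (ret)
		return ret;	/* typically -ERESTARTSYS when a signal is pending */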
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The simple attr macros currently used by spufs can't deal with the
handlers returning errors, which is required to make the state_mutex
interruptible. This adds a local copy that allows for an error
return from the get/set handlers.
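As a rough sketch of the idea (the signatures here are illustrative, not
the exact spufs macro), the get handlers change from returning the value
to returning an error code and passing the value out through a pointer:
	/* before: no way to report failure */
	u64 spufs_foo_get(void *data);
	/* after: value via pointer, so e.g. -ERESTARTSYS can be returned */
	int spufs_foo_get(void *data, u64 *val);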
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Change spufs_spu_run so that the context is queued directly to the
scheduler and the controlling thread advances directly to spufs_wait()
for spe errors and exceptions.
nosched contexts are treated the same as before.
Fixes from Christoph Hellwig <hch@lst.de>
Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This changes the spu context switch code to not write to reserved bits
of spu interrupt status register.
The architecture book says the reserved fields should be set to zero.
Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Need to re-check priority after dropping lock. Otherwise, a
more favored context may be preempted.
Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This cleans up spu_run_init so that it does all of the spu
initialization for spufs_run_spu. It initializes the spu context as
much as possible before it activates the spu and writes the runcntl
register.
Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Based on original patches from
Arnd Bergmann <arnd.bergmann@de.ibm.com>; and
Luke Browning <lukebr@linux.vnet.ibm.com>
Currently, spu contexts need to be loaded to the SPU in order to take
class 0 and class 1 exceptions.
This change makes the actual interrupt-handlers much simpler (ie,
set the exception information in the context save area), and defers the
handling code to the spufs_handle_class[01] functions, called from
spufs_run_spu.
This should improve the concurrency of the spu scheduling, leading to
greater SPU utilization when SPUs are overcommitted.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add a few #defines for the class 0, 1 and 2 interrupt status bits, and
use them instead of magic numbers when we're setting or checking for
these interrupts.
Also, add a #define for the class 2 mailbox threshold interrupt mask.
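A hedged sketch of the style of definitions this adds (the names and bit
values here are illustrative, not necessarily the exact ones in spu.h):
	#define CLASS0_SPU_ERROR_INTR		0x4
	#define CLASS1_SEGMENT_FAULT_INTR	0x1
	#define CLASS2_MAILBOX_INTR		0x1
	#define CLASS2_MAILBOX_THRESHOLD_INTR	0x10
so that a check like (stat & 0x10) becomes
(stat & CLASS2_MAILBOX_THRESHOLD_INTR).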
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
When doing a poll on the mbox stat file of a swapped-out context, we
clear the class 0 interrupt status, rather than the class 2 interrupt
status.
This change corrects the poll operation to clear the correct interrupt.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This change encapsulates the spu_privcntl_RW register so that it can
be written through backing ops. This is necessary so that spu contexts
can be initialized and queued to the scheduler in spufs_run_spu.
Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This change disables the logic that faults-in spu contexts under the
covers from the page fault handler. When a fault requires a runnable
context, the handler will block until the context is scheduled by
other means.
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently, part of the spufs code (switch.o, lscsa_alloc.o and fault.o)
is compiled directly into the kernel.
This change moves these components of spufs into the spufs module.
The lscsa and switch objects are fairly straightforward to move in.
For the fault.o module, we split the fault-handling code into two
parts: a/p/p/c/spu_fault.c and a/p/p/c/spufs/fault.c. The former is for
the in-kernel spu_handle_mm_fault function, and we move the rest of the
fault-handling code into spufs.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Fix a few typos in the spufs scheduler comments
Signed-off-by: Julio M. Merino Vidal <jmerino@ac.upc.edu>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add platform specific SPU run control routines to the spufs. The current
spufs implementation uses the SPU master run control bit (MFC_SR1[S]) to
control SPE execution, but the PS3 hypervisor does not support the use of
this feature.
This change adds the run control wrapper routines spu_enable_spu() and
spu_disable_spu(). The bare metal routines use the master run control
bit, and the PS3 specific routines use the priv2 run control register.
An outstanding enhancement for the PS3 would be to add a guard to check
for incorrect access to the spu problem state when the spu context is
disabled. This check could be implemented with a flag added to the spu
context that would inhibit mapping problem state pages, and a routine
to unmap spu problem state pages. When the spu is enabled with
ps3_enable_spu() the flag would be set allowing pages to be mapped,
and when the spu is disabled with ps3_disable_spu() the flag would be
cleared and mapped problem state pages would be unmapped.
Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
There should be an of_node_put when breaking out of a loop that iterates
using for_each_node_by_type.
This was detected and fixed using the following semantic patch.
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@
identifier d;
type T;
expression e;
iterator for_each_node_by_type;
@@
T *d;
...
for_each_node_by_type(d,...)
{... when != of_node_put(d)
when != e = d
(
return d;
|
+ of_node_put(d);
? return ...;
)
...}
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Cc: Christian Krafft <krafft@de.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Erb <djerb@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This splits the machine definition for celleb into two definitions,
one for celleb_beat, and the other for celleb_native. Though this
looks complex because some functions have been reordered, there are
no semantic changes beyond the split itself.
Signed-off-by: Kou Ishizaki <Kou.Ishizaki@toshiba.co.jp>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This makes mmio_nvram_init() callable unconditionally by providing
a dummy definition when CONFIG_MMIO_NVRAM is not defined.
Signed-off-by: Kou Ishizaki <Kou.Ishizaki@toshiba.co.jp>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We're currently getting a warning from not checking the result of
sysfs_create_group, which is declared as __must_check.
This change introduces appropriate error handling for
spu_add_sysdev_attr_group().
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Currently, there is a possibility that the SLB entries set up during
context switch don't cover the entirety of the necessary lscsa and code
regions, if these regions cross a segment boundary.
This change checks the start and end of each region, and inserts an SLB
entry for each, if unique. We also remove the assumption that the
spu_save_code and spu_restore_code reside in the same segment, by using
the specific code array for save and restore.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Add a function spu_64k_pages_available(), so that we can abstract the
explicit use of mmu_psize_defs[] in lscsa_alloc.c.
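A minimal sketch of such a helper, assuming the usual mmu_psize_defs[]
table (the real function may differ in detail):
	int spu_64k_pages_available(void)
	{
		return mmu_psize_defs[MMU_PAGE_64K].shift != 0;
	}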
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Now that we have a helper function to setup a SPU SLB, use it for
__spu_trap_data_seg.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Currently, the SPU context switch code (spufs/switch.c) sets up the
SPU's SLBs directly, which requires some low-level mm stuff.
This change moves the kernel SLB setup to spu_base.c, by exposing
a function spu_setup_kernel_slbs() to do this setup. This allows us
to remove the low-level mm code from switch.c, making it possible
to later move switch.c to the spufs module.
Also, add a struct spu_slb for the cases where we need to deal with
SLB entries.
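The struct is essentially a software image of one SLB entry; a minimal
sketch, with the field layout assumed from the standard esid/vsid pair:
	struct spu_slb {
		u64 esid, vsid;
	};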
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
This patch changes the way we check for the existence of the vicinity
property in spe device nodes.
The new implementation does not depend on having an initialized
cbe_spu_info[0].spus, and checks for the presence of the vicinity
property in all nodes, not only in the first one.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Export force_sig_info to allow signals to be sent from a modular spufs.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The pm_interval register in the Cell PMU is read/write, but was implemented in
the kernel as write-only. Previously, the written value was saved in a "shadow"
copy so calls to cbe_read_pm() could return the value.
Perfmon2 needs to be able to read the current values of pm_interval, so change
cbe_read_pm() to read the actual register instead of the "shadow" copy. There
is currently no code in the kernel that tries to read the pm_interval register
with cbe_read_pm() (expecting to receive the "shadow" value), so this should
not break any existing code.
Signed-off-by: Kevin Corry <kevcorry@us.ibm.com>
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
This adds support for native CBE on Celleb, that is, without the BEAT
hypervisor. Much of the code in platforms/cell/ is reused in the native
CBE environment.
Signed-off-by: Kou Ishizaki <Kou.Ishizaki@toshiba.co.jp>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This fixes the following link error with CONFIG_PPC_CELL_NATIVE=y and
CONFIG_PPC_CELL_BLADE=n:
arch/powerpc/platforms/built-in.o: In function `.cell_setup_arch':
setup.c:(.init.text+0xe80): undefined reference to `.mmio_nvram_init'
Signed-off-by: Kou Ishizaki <Kou.Ishizaki@toshiba.co.jp>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This cleans up the SMT thread handling, removing some hard coded
assumptions and providing a set of helpers to convert between linux
cpu numbers, thread numbers and cores.
This implementation requires the number of threads per core to be a
power of 2 and identical on all cores in the system, but it's an
implementation detail, not an API requirement and so this limitation
can be lifted in the future if anybody ever needs it.
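A hedged sketch of the kind of helpers this provides (names and details
are illustrative, relying on the stated power-of-2 assumption for
threads_per_core):
	static inline int cpu_thread_in_core(int cpu)
	{
		return cpu & (threads_per_core - 1);
	}
	static inline int cpu_first_thread_in_core(int cpu)
	{
		return cpu & ~(threads_per_core - 1);
	}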
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We can currently cause an oops by repeatedly creating and destroying
contexts, while doing getdents() calls on the "/spu" directory.
This is due to the context's top-level dentry remaining hashed while
the context is being destroyed.
Fix this by unhashing the context's dentry with the
dentry->d_inode->i_mutex held. This way, we'll hit the check for
d_unhashed in dcache_readdir, and won't be included in the
list of subdirs for /spu.
test: spufs-testsuite:tests/01-spu_create/07-destroy-vs-readdir-race
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This fixes the error
error: implicit declaration of function "udbg_printf"
We have a few spots where we reference udbg_printf() without #including
udbg.h. These are within #ifdef DEBUG blocks, so they go unnoticed until
we do a #define DEBUG or #define DEBUG_LOW nearby.
Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Fix two build errors on powerpc allyesconfig + CONFIG_SMP=n:
arch/powerpc/platforms/built-in.o: In function `cpu_affinity_set':
arch/powerpc/platforms/cell/spu_priv1_mmio.c:78: undefined reference to `.iic_get_target_id'
arch/powerpc/platforms/built-in.o: In function `iic_init_IRQ':
arch/powerpc/platforms/cell/interrupt.c:397: undefined reference to `.iic_setup_cpu'
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
* Convert files to UTF-8.
* Also correct some people's names
(one example is Eißfeldt, which was found in a source file.
That the author used a 'ß' at all in a source file indicates
that the real name does in fact have a 'ß' and not an 'ss',
which is commonly used as a substitute for 'ß' when limited to
7-bit.)
* Correct town names (Goettingen -> Göttingen)
* Update Eberhard Mönkeberg's address (http://lkml.org/lkml/2007/1/8/313)
Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Slab constructors currently have a flags parameter that is never used. And
the order of the arguments is opposite to other slab functions. The object
pointer is placed before the kmem_cache pointer.
Convert
ctor(void *object, struct kmem_cache *s, unsigned long flags)
to
ctor(struct kmem_cache *s, void *object)
throughout the kernel
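For a typical slab user the conversion looks like this (foo_init_once is
just an illustrative name):
	/* before */
	static void foo_init_once(void *object, struct kmem_cache *cachep, unsigned long flags);
	/* after */
	static void foo_init_once(struct kmem_cache *cachep, void *object);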
[akpm@linux-foundation.org: coupla fixes]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Convert cpu_sibling_map from a static array sized by NR_CPUS to a per_cpu
variable. This saves sizeof(cpumask_t) * NR unused cpus. Access is mostly
from startup and CPU HOTPLUG functions.
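Conceptually the conversion is (a hedged sketch, not the exact patch):
	/* before: static array sized by NR_CPUS */
	cpumask_t cpu_sibling_map[NR_CPUS];
	/* after: per-cpu variable, accessed via per_cpu(cpu_sibling_map, cpu) */
	DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);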
Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
msic_dcr_read() doesn't really do anything useful, just replace it with
direct calls to dcr_read().
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Now that all users of dcr_read()/dcr_write() add the dcr_host_t.base, we
can save them the trouble and do it in dcr_read()/dcr_write().
As some background to why we just went through all this jiggery-pokery,
benh sayeth:
Initially the goal of the dcr_read/dcr_write routines was to operate like
mfdcr/mtdcr which take absolute DCR numbers. The reason is that on 4xx
hardware, indirect DCR access is a pain (goes through a table of
instructions) and it's useful to have the compiler resolve an absolute DCR
inline.
We decided that wasn't worth the API bastardisation since most places
where absolute DCR values are used are low level 4xx-only code which may
as well continue using mfdcr/mtdcr, while the new API is designed for
device "instances" that can exist on 4xx and Axon type platforms and may
be located at variable DCR offsets.
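In caller terms the change looks like this (MY_DCR_REG is an illustrative
offset, not a real register name):
	/* before: every caller added the base itself */
	val = dcr_read(host, host.base + MY_DCR_REG);
	/* after: dcr_read()/dcr_write() add host.base internally */
	val = dcr_read(host, MY_DCR_REG);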
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
* master.kernel.org:/pub/scm/linux/kernel/git/davej/cpufreq:
[CPUFREQ] Don't take semaphore in cpufreq_quick_get()
[CPUFREQ] Support different families in fid/did to frequency conversion
[CPUFREQ] cpufreq_stats: misc cpuinit section annotations
[CPUFREQ] implement !CONFIG_CPU_FREQ stub for cpufreq_unregister_notifier()
[CPUFREQ] mark hotplug notifier callback as __cpuinit
[CPUFREQ] Only check for transition latency on problematic governors (kconfig fix)
[CPUFREQ] allow ondemand and conservative cpufreq governors to be used as default
[CPUFREQ] move policy's governor initialisation out of low-level drivers into cpufreq core
[CPUFREQ] Longhaul - Add support for PM133 northbridge
[CPUFREQ] x86: use num_online_nodes to get physical cpus numbers for
* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (408 commits)
[POWERPC] Add memchr() to the bootwrapper
[POWERPC] Implement logging of unhandled signals
[POWERPC] Add legacy serial support for OPB with flattened device tree
[POWERPC] Use 1TB segments
[POWERPC] XilinxFB: Allow fixed framebuffer base address
[POWERPC] XilinxFB: Add support for custom screen resolution
[POWERPC] XilinxFB: Use pdata to pass around framebuffer parameters
[POWERPC] PCI: Add 64-bit physical address support to setup_indirect_pci
[POWERPC] 4xx: Kilauea defconfig file
[POWERPC] 4xx: Kilauea DTS
[POWERPC] 4xx: Add AMCC Kilauea eval board support to platforms/40x
[POWERPC] 4xx: Add AMCC 405EX support to cputable.c
[POWERPC] Adjust TASK_SIZE on ppc32 systems to 3GB that are capable
[POWERPC] Use PAGE_OFFSET to tell if an address is user/kernel in SW TLB handlers
[POWERPC] 85xx: Enable FP emulation in MPC8560 ADS defconfig
[POWERPC] 85xx: Killed <asm/mpc85xx.h>
[POWERPC] 85xx: Add cpm nodes for 8541/8555 CDS
[POWERPC] 85xx: Convert mpc8560ads to the new CPM binding.
[POWERPC] mpc8272ads: Remove muram from the CPM reg property.
[POWERPC] Make clockevents work on PPC601 processors
...
Fixed up conflict in Documentation/powerpc/booting-without-of.txt manually.
This makes the kernel use 1TB segments for all kernel mappings and for
user addresses of 1TB and above, on machines which support them
(currently POWER5+, POWER6 and PA6T).
We detect that the machine supports 1TB segments by looking at the
ibm,processor-segment-sizes property in the device tree.
We don't currently use 1TB segments for user addresses < 1T, since
that would effectively prevent 32-bit processes from using huge pages
unless we also had a way to revert to using 256MB segments. That
would be possible but would involve extra complications (such as
keeping track of which segment size was used when HPTEs were inserted)
and is not addressed here.
Parts of this patch were originally written by Ben Herrenschmidt.
Signed-off-by: Paul Mackerras <paulus@samba.org>
There is no good reason for board platform code to mess with the
ROOT_DEV. Remove it from all in-tree platforms except powermac.
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Based on BenH's earlier work, this is a new version of the EMAC driver
for the built-in ethernet found on PowerPC 4xx embedded CPUs. The
same ASIC is also found in the Axon bridge chip. This new version is
designed to work in the arch/powerpc tree, using the device tree to
probe the device, rather than the old and ugly arch/ppc OCP layer.
This driver is designed to sit alongside the old driver (that lies in
drivers/net/ibm_emac and this one in drivers/net/ibm_newemac). The
old driver is left in place to support arch/ppc until arch/ppc itself
reaches its final demise (not too long now, with luck).
This driver still has a number of things that could do with cleaning
up, but I think they can be fixed up after merging. Specifically:
- Should be adjusted to properly use the dma mapping API.
Axon needs this.
- Probe logic needs reworking, in conjunction with the general
probing code for of_platform devices. The dependencies here between
EMAC, MAL, ZMII etc. make this complicated. At present, it usually
works, because we initialize and register the sub-drivers before the
EMAC driver itself, and (being in driver code) this runs after the
devices themselves have been instantiated from the device tree.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
This adds definitions for the Cell memory controller registers (at
least some of them) for use by the EDAC driver for ECC error reporting.
It also exposes the said MIC as a platform device that can be used
by the EDAC driver to match on.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The new Cell EDAC driver needs that file, and oprofile also does ugly
path tricks to get to it, so it's time to move it to asm-powerpc. While
at it, rename it to be consistent with cell-pmu.h (and dashes look
nicer than underscores anyway).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Bryan Wu <bryan.wu@analog.com>
Cc: Andi Kleen <ak@suse.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>
Now that dcr_host_t contains the base address, we can use that in the
axon_msi code, rather than storing it separately.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
pci_device_to_OF_node() returns the device node attached to a PCI device,
but doesn't actually grab a reference - we need to do it ourselves.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The commit 8b6f50ef1d seems to have
been affected by a mismerge of a duplicate patch
(d054b36ffd) - both the
spufs_dir_contents and spufs_dir_nosched_contents have been given
write-only signal notification files.
This change reverts the spufs_dir_contents array to use the
readable signal notification file implementation.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The cast to u32 * isn't required, of_get_property returns a void *.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
find_victim can dereference a NULL pointer when iterating over the list
of victim spus because list_mutex only guarantees spu->ctx to be stable,
but of course not to be non-NULL.
Also fix find_victim to not call spu_unbind_context without list_mutex
because that violates the above guarantee.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch adds DEFINE_SPUFS_ATTRIBUTE(), a wrapper around
DEFINE_SIMPLE_ATTRIBUTE which does the specified locking for the get
routine for us.
Unfortunately we need two get routines (a locked and unlocked version) to
support the coredump code. This hides one of those (the locked version)
inside the macro foo.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently the spu coredump code doesn't respect the ulimit; it should.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Rework spufs_coredump_extra_notes_write() to check for and return errors.
If we're coredumping to a pipe we can't trust file->f_pos, we need to
maintain the foffset value passed to us. The cleanest way to do this is
to have the low level write routine increment foffset when we've
successfully written.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
To start with, arch_notes_size() etc. is a little too ambiguous a name for
my liking, so change the function names to be more explicit.
Calling through macros is ugly, especially with hidden parameters, so don't
do that; call the routines directly.
Use ARCH_HAVE_EXTRA_ELF_NOTES as the only flag, and based on it decide
whether we want the extern declarations or the empty versions.
Since we have empty routines, actually use them in the coredump code to
save a few #ifdefs.
We want to change the handling of foffset so that the write routine updates
foffset as it goes, instead of using file->f_pos (so that writing to a pipe
works). So pass foffset to the write routine, and for now just set it to
file->f_pos at the end of writing.
It should also be possible for the write routine to fail, so change it to
return int and treat a non-zero return as failure.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Because spufs might be built as a module, we can't have other parts of the
kernel calling directly into it; we need stub routines that check first if
the module is loaded.
Currently we have two structures which hold callbacks for these stubs, the
syscalls are in spufs_calls and the coredump calls are in spufs_coredump_calls.
In both cases the logic for registering/unregistering is essentially the same,
so we can simplify things by combining the two.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The SPUFS attribute get routines take a void * because the generic attribute
code doesn't know what sort of data it's passing around.
However our internal __spufs_get_foo() routines can take a spu_context *
directly, which saves plonking it in and out of a void * again.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The spufs_coredump_read array is NULL terminated, and we also store the size.
We only need one or the other, and the other arrays in file.c are NULL
terminated, so do that.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Because the SPU coredump code might be built as part of a module (spufs),
we have a stub which is called by the coredump code; this routine then
calls into spufs if it's loaded.
Unfortunately the stub returns -ENOSYS if spufs is not loaded, which is
interpreted by the coredump code as an extra note size of -38 bytes. This
leads to a corrupt core dump.
If spufs is not loaded there will be no SPU ELF notes to write, and so the
extra notes size will be == 0.
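In the stub that roughly means (a sketch; "calls" stands for whatever
pointer the stub uses to tell whether spufs is registered):
	if (!calls)
		return 0;	/* spufs not loaded: no SPU notes, zero extra size */
instead of returning -ENOSYS.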
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The routine to dump the local store, __spufs_mem_read(), does not take the
spu_lslr_RW value into account - so we shouldn't check it when we're
calculating the size either.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Unfortunately GDB expects some of the SPU coredump values to be identical
in format to what is found in spufs. This means we need to dump some of
the values as ASCII strings, not the actual values.
Because we don't know what the values will be, we always print the values
with the format "0x%.16lx", that way we know the result will be 19 bytes.
do_coredump_read() doesn't take a __user buffer, so remove the annotation,
and because we know that it's safe to just snprintf() directly to it.
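(As a quick check of the arithmetic: "0x%.16lx" always prints 2 + 16 = 18
characters, plus the NUL terminator, hence the fixed 19 bytes.)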
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The spufs_coredump_reader array contains the size of the data that will be
returned by the read routine. Currently these are specified as literals,
and though some are obvious (sizeof(u32) == 4), others are not (69 * 8 == ???).
Instead, use sizeof() whatever type is returned by each routine, or in
the case of spufs_mem_read() the #define LS_SIZE.
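Illustrative entries (not necessarily the exact table contents) then read:
	{ "regs", __spufs_regs_read, NULL, sizeof(struct spu_reg128[128]) },
	{ "mem",  __spufs_mem_read,  NULL, LS_SIZE },
rather than hard-coded byte counts.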
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
It makes sense to stop the SPU processes as soon as possible. Also if we
don't acquire_saved() I think there's a possibility that the value in
csa.priv2.spu_lslr_RW won't be accurate.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Remove the ctx_info struct entirely, and also the ctx_info_list. This
fixes a race where two processes can clobber each other's ctx_info structs.
Instead of using the list, we just repeat the search through the file
descriptor table.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Extract the logic for searching through the file descriptors for spu contexts
into a separate routine, coredump_next_context(), so we can use it elsewhere
in future. In the process we flatten the for loop, and move the NOSCHED test
into coredump_next_context().
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We don't want SPE programs to be able to flood the kernel log by
invoking the SPE callback handler, so don't enable DEBUG for
spu_callbacks.c by default.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Based on an original patch from Masato Noguchi
<Masato.Noguchi@jp.sony.com>.
We're currently not restoring the SPE decrementer as specified by the
CBE handbook. This change fixes our implementation to match, and makes
the function read more like the docs.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
At present, a built-in spufs will not use the spufs_calls callbacks, but
directly call sys_spu_create. This saves us an indirect branch, but
means we have duplicated functions - one for CONFIG_SPU_FS=y and one for
=m.
This change unifies the spufs syscall path, and provides access to the
spufs_calls structure through a get/put pair. At present, the only user
of the spufs_calls structure is spu_syscalls.c, but this will facilitate
adding the coredump calls later.
Everyone likes numbers, right? Here's a before/after comparison with
CONFIG_SPU_FS=y, doing spu_create(); close(); 64k times.
Before:
[jk@cell ~]$ time ./spu_create
performing 65536 spu_create calls
real 0m24.075s
user 0m0.146s
sys 0m23.925s
After:
[jk@cell ~]$ time ./spu_create
performing 65536 spu_create calls
real 0m24.777s
user 0m0.141s
sys 0m24.631s
So, we're adding around 11us per syscall, at the benefit of having
only one syscall path.
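A hedged sketch of the resulting access pattern (member and helper names
are illustrative):
	calls = spufs_calls_get();
	if (!calls)
		return -ENOSYS;
	ret = calls->create_thread(name, flags, mode, neighbor);
	spufs_calls_put(calls);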
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Affinity reference point location (gang->aff_ref_spu) is reset
when the whole gang is descheduled. However, the last member of
a gang can be descheduled while we are trying to schedule another
member of the gang. This was leading to a race condition, and
the code was using gang->aff_ref_spu in an unsafe manner.
By holding the gang->aff_mutex a little bit longer, and incrementing
gang->aff_sched_count (which controls when gang->aff_ref_spu
should be reset) a little bit earlier, the problem is fixed.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
According to the comment in spufs_init_isolated_loader(), the isolated
loader should be aligned on a 16 byte boundary.
ARCH_{KMALLOC,SLAB}_MINALIGN is not defined so only 8 byte alignment is
guaranteed.
This enforces alignment via __get_free_pages.
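A minimal sketch of the allocation (size/order handling is illustrative):
	loader = (void *)__get_free_pages(GFP_KERNEL, get_order(size));
which gives page alignment, comfortably more than the required 16 bytes.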
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Based on an initial patch from Sebastian Siewior
<sebastian@breakpoint.cc>
spu_harvest isn't used, remove it.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
do_spu_create doesn't need the asmlinkage qualifier; remove it.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
There are a few symbols used only in one file within spufs; this change
makes them static where suitable.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The most common match semantic is an exact match based on the device node.
So provide a default implementation that does this, and hook it up if no
match routine is specified.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The majority of irq_host implementations (3 out of 4) are associated
with a device_node, and need to stash it somewhere. Rather than having
it somewhere different for each host, add an optional device_node pointer
to the irq_host structure.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The Cell BE Architecture spec states that the SPU MFC Class 0 interrupt
is edge-triggered. The current spu interrupt handler assumes this
behavior and does not clear the interrupt status.
The PS3 hypervisor virtualizes all SPU interrupts as level triggered, and
on return from the interrupt handler the hypervisor will deliver a new
virtual interrupt for any unmasked interrupt for which the status has not
been cleared. This fix clears the interrupt status in the interrupt
handler.
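A hedged sketch of the handler change (the exact mask handling is
illustrative; spu_int_stat_get/clear are the existing priv1 accessors):
	stat = spu_int_stat_get(spu, 0);
	/* ... handle the class 0 conditions ... */
	spu_int_stat_clear(spu, 0, stat);	/* ack, so the hypervisor won't redeliver */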
Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This fixes a major bug which was happening when a SPU thread advanced
its execution right after being restored to a SPU. A potentially
outdated NPC value was being (re)written to the SPU.
So, spu_run_init, in this case, was either not doing anything relevant,
or breaking the execution of the SPU thread.
This fixes a common problem of losing a mailbox write when it was done
to a saved context.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
When a process writes into the inbound spu mailbox (wbox) while the
context is saved, we accidentally break the contents of the mb_stat_R
register by clearing other entries of the mailbox status register. This
can cause the user side to hang.
This change fixes the problem by only altering the appropriate bits
of the mailbox status register during a backing-store write.
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This fixes a regression introduced with 2.6.23-rc4 after some
confusion about the device tree interfaces.
IBM QS21 device trees provide "physical-id", so we changed the code to
run on that and remain compatible with all IBM machines.
However, the Toshiba Celleb device tree provides the "unit-id" property,
which was in the Linux code, but never used in this way on IBM hardware.
Legacy device trees used the reg property for the physical id of an spe.
This patch fixes find_spu_unit_number to look for the spu id in these
properties (physical-id, unit-id, then reg), in that order.
The length is checked to avoid misinterpretation in case the attributes
unit-id or reg do not contain the id.
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Cc: Jeremy Kerr <jk@ozlabs.org>
The Cell Broadband Engine has a method of injecting a
system-reset-exception from an external source into the
operating system, which should trigger the regular behaviour
of entering xmon or kdump.
Unfortunately, the exception handler cannot distinguish it from
other interrupt causes by the SRR1 register, which gets used
for this on Power 6 and others.
IBM Blade servers that want to support triggering the
system reset exception using a pinhole button in the front
panel therefore use an extra register to determine the
reset cause.
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Legacy device trees used the reg property for the physical id of an
spe. On newer device tree layouts the reg attribute contains the
"correct" value, so the "physical-id" property has been introduced for
the physical id on those layouts. The id is stored by spu_manage into
the spu struct as spe_id, and cbe_thermal has been changed to use
spu->spe_id, so there's no need for the thermal code to check device
tree attributes itself.
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Cc: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
At present, spu_create with an invalid neighbo(u)r will return -ENOSYS,
not -EBADF, but only when spufs.o is built as a module.
This change adds the appropriate errno, making the behaviour the same
as the built-in case.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch moves affinity initialization code from spu_base.c to a
new spu_management_of_ops function (init_affinity), which is empty
in the case of PS3. This fixes a linking problem that was happening
when compiling for PS3.
Also, some small code style changes were made.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch fixes affinity reference point placement, which was not being
done in some situations, after the introduction of node_allowed() calls.
The previously used parameter, 'ctx', is just the iterator of the
previous list_for_each_entry_reverse loop, and its value might be
invalid at the end of the loop. Also, the right context to seek
for information when defining the reference ctx location
_is_ the reference ctx.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently we calculate the first timeslice for every context
incorrectly - alloc_spu_context calls spu_set_timeslice before we set
ctx->prio so we always calculate the longest possible timeslice for the
lowest possible priority.
This patch makes sure to update the schedule-related fields before
calculating the timeslice and also makes sure we update the timeslice for
a non-running context when entering spu_run so a priority change affects
the context as soon as possible.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We currently initialize cbe_spu_info[].spus in both init_spu_base and
spu_sched_init. The initialization in spu_sched_init clears the SPU list,
so we end up with no physical SPUs. Because of this, the spu_run
syscall will block forever.
This change removes the unnecessary initialization in spu_sched_init.
Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
spufs.h now has two enums for the sched_flags leading to identical
values for SPU_SCHED_WAS_ACTIVE and SPU_SCHED_NOTIFY_ACTIVE. Merge
them into a single enum as they were in the IBM development tree.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fix a mismerge in commit 8b6f50ef1d:
"spufs: make signal-notification files readonly for NOSCHED contexts",
where structs got duplicated.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Reading from the signal{1,2} files requires a spu_acquire_saved, so make these
files write-only for contexts created with SPU_CREATE_NOSCHED.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This sorts out the various lists and related locks in the spu code.
In detail:
- the per-node free_spus and active_list are gone. Instead struct spu
gained an alloc_state member telling whether the spu is free or not
- the per-node spus array is now locked by a per-node mutex, which
takes over from the global spu_lock and the per-node active_mutex
- the spu_alloc* and spu_free functions are gone as the state change is
now done inline in the spufs code. This allows some more sharing of
code for the affinity vs normal case and more efficient locking
- some little refactoring in the affinity code for this locking scheme
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
From: Maynard Johnson <mpjohn@us.ibm.com>
This patch updates the existing arch/powerpc/oprofile/op_model_cell.c
to add in the SPU profiling capabilities. In addition, a 'cell' subdirectory
was added to arch/powerpc/oprofile to hold Cell-specific SPU profiling code.
Exports spu_set_profile_private_kref and spu_get_profile_private_kref which
are used by OProfile to store private profile information in spufs data
structures.
Also incorporated several fixes from other patches (rrn). Check pointer
returned from kzalloc. Eliminated unnecessary cast. Better error
handling and cleanup in the related area. 64-bit unsigned long parameter
was being demoted to 32-bit unsigned int and eventually promoted back to
unsigned long.
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
From: Maynard Johnson <mpjohn@us.ibm.com>
This patch adds to the capability of spu_switch_event_register so that
the caller is also notified of currently active SPU tasks.
Exports spu_switch_event_register and spu_switch_event_unregister so
that OProfile can get access to the notifications provided.
Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Sort out the locking mess in spu_base and document the current rules.
As an added benefit spu_alloc* and spu_free don't block anymore.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This patch links spus according to their physical position using
information provided by the firmware through a special vicinity
device-tree property. This property is present in current version
of Malta firmware.
Example of vicinity properties for a node in Malta:
Node: Vicinity property contains phandles of:
spe@0 [ spe@100000 , mic-tm@50a000 ]
spe@100000 [ spe@0 , spe@200000 ]
spe@200000 [ spe@100000 , spe@300000 ]
spe@300000 [ spe@200000 , bif0@512000 ]
spe@80000 [ spe@180000 , mic-tm@50a000 ]
spe@180000 [ spe@80000 , spe@280000 ]
spe@280000 [ spe@180000 , spe@380000 ]
spe@380000 [ spe@280000 , bif0@512000 ]
Only spe@* have a vicinity property (e.g., bif0@512000 and
mic-tm@50a000 do not have it).
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This patch makes the scheduler honor affinity information for each
context being scheduled. If the context has no affinity information,
behaviour is unchanged. If there is affinity information, the context
is scheduled to run on the exact spu recommended by the affinity
placement algorithm.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This patch provides the spu affinity placement logic for the spufs scheduler.
Each time a gang is going to be scheduled, the placement of a reference
context is defined. The placement of all other contexts with affinity from
the gang is defined based on this reference context location and on a
precomputed displacement offset.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>