Merge branches 'sh/g3-prep' and 'sh/stable-updates'

Paul Mundt 2009-12-24 15:16:41 +09:00
commit 17eb9d6282
334 changed files with 11050 additions and 8482 deletions

View File

@ -21,25 +21,27 @@ Contact: Alan Stern <stern@rowland.harvard.edu>
Description:
Each USB device directory will contain a file named
power/level. This file holds a power-level setting for
the device, one of "on", "auto", or "suspend".
the device, either "on" or "auto".
"on" means that the device is not allowed to autosuspend,
although normal suspends for system sleep will still
be honored. "auto" means the device will autosuspend
and autoresume in the usual manner, according to the
capabilities of its driver. "suspend" means the device
is forced into a suspended state and it will not autoresume
in response to I/O requests. However remote-wakeup requests
from the device may still be enabled (the remote-wakeup
setting is controlled separately by the power/wakeup
attribute).
capabilities of its driver.
During normal use, devices should be left in the "auto"
level. The other levels are meant for administrative uses.
level. The "on" level is meant for administrative uses.
If you want to suspend a device immediately but leave it
free to wake up in response to I/O requests, you should
write "0" to power/autosuspend.
Devices not capable of proper suspend and resume should be
left in the "on" level. Although the USB spec requires
devices to support suspend/resume, many of them do not.
In fact, so many don't that by default the USB core
initializes all non-hub devices in the "on" level. Some
drivers may change this setting when they are bound.
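As an illustration only (the device path is hypothetical; pick a real
one from /sys/bus/usb/devices), a minimal user-space sketch of driving
this attribute:

	#include <stdio.h>

	int main(void)
	{
		/* hypothetical device path */
		FILE *f = fopen("/sys/bus/usb/devices/1-1/power/level", "w");
		if (!f)
			return 1;
		fputs("auto", f);	/* permit autosuspend; "on" forbids it */
		return fclose(f) ? 1 : 0;
	}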
What: /sys/bus/usb/devices/.../power/persist
Date: May 2007
KernelVersion: 2.6.23

View File

@ -226,5 +226,5 @@ struct driver_attribute driver_attr_debug;
This can then be used to add and remove the attribute from the
driver's directory using:
int driver_create_file(struct device_driver *, struct driver_attribute *);
void driver_remove_file(struct device_driver *, struct driver_attribute *);
int driver_create_file(struct device_driver *, const struct driver_attribute *);
void driver_remove_file(struct device_driver *, const struct driver_attribute *);

View File

@ -91,8 +91,8 @@ struct device_attribute {
const char *buf, size_t count);
};
int device_create_file(struct device *, struct device_attribute *);
void device_remove_file(struct device *, struct device_attribute *);
int device_create_file(struct device *, const struct device_attribute *);
void device_remove_file(struct device *, const struct device_attribute *);
It also defines this helper for defining device attributes:
@ -316,8 +316,8 @@ DEVICE_ATTR(_name, _mode, _show, _store);
Creation/Removal:
int device_create_file(struct device *device, struct device_attribute * attr);
void device_remove_file(struct device * dev, struct device_attribute * attr);
int device_create_file(struct device *dev, const struct device_attribute * attr);
void device_remove_file(struct device *dev, const struct device_attribute * attr);
- bus drivers (include/linux/device.h)
@ -358,7 +358,7 @@ DRIVER_ATTR(_name, _mode, _show, _store)
Creation/Removal:
int driver_create_file(struct device_driver *, struct driver_attribute *);
void driver_remove_file(struct device_driver *, struct driver_attribute *);
int driver_create_file(struct device_driver *, const struct driver_attribute *);
void driver_remove_file(struct device_driver *, const struct driver_attribute *);
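A short sketch of how a driver might use the const-qualified interfaces
above (all names are illustrative, not from this patch):

	static ssize_t debug_show(struct device_driver *drv, char *buf)
	{
		return sprintf(buf, "%d\n", 42);	/* hypothetical state */
	}
	static DRIVER_ATTR(debug, 0444, debug_show, NULL);

	/* in the driver's init path, with foo_driver a device_driver: */
	/*	ret = driver_create_file(&foo_driver, &driver_attr_debug); */
	/* and on exit: */
	/*	driver_remove_file(&foo_driver, &driver_attr_debug); */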

View File

@ -42,80 +42,81 @@ struct dev_pm_ops {
...
};
The ->runtime_suspend() callback is executed by the PM core for the bus type of
the device being suspended. The bus type's callback is then _entirely_
_responsible_ for handling the device as appropriate, which may, but need not
include executing the device driver's own ->runtime_suspend() callback (from the
PM core's point of view it is not necessary to implement a ->runtime_suspend()
callback in a device driver as long as the bus type's ->runtime_suspend() knows
what to do to handle the device).
The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks are
executed by the PM core for either the bus type, or device type (if the bus
type's callback is not defined), or device class (if the bus type's and device
type's callbacks are not defined) of a given device. The bus type, device type
and device class callbacks are referred to as subsystem-level callbacks in what
follows.
* Once the bus type's ->runtime_suspend() callback has completed successfully
The subsystem-level suspend callback is _entirely_ _responsible_ for handling
the suspend of the device as appropriate, which may, but need not include
executing the device driver's own ->runtime_suspend() callback (from the
PM core's point of view it is not necessary to implement a ->runtime_suspend()
callback in a device driver as long as the subsystem-level suspend callback
knows what to do to handle the device).
* Once the subsystem-level suspend callback has completed successfully
for a given device, the PM core regards the device as suspended, which need
not mean that the device has been put into a low power state. It is
supposed to mean, however, that the device will not process data and will
not communicate with the CPU(s) and RAM until its bus type's
->runtime_resume() callback is executed for it. The run-time PM status of
a device after successful execution of its bus type's ->runtime_suspend()
callback is 'suspended'.
not communicate with the CPU(s) and RAM until the subsystem-level resume
callback is executed for it. The run-time PM status of a device after
successful execution of the subsystem-level suspend callback is 'suspended'.
* If the bus type's ->runtime_suspend() callback returns -EBUSY or -EAGAIN,
the device's run-time PM status is supposed to be 'active', which means that
the device _must_ be fully operational afterwards.
* If the subsystem-level suspend callback returns -EBUSY or -EAGAIN,
the device's run-time PM status is 'active', which means that the device
_must_ be fully operational afterwards.
* If the bus type's ->runtime_suspend() callback returns an error code
different from -EBUSY or -EAGAIN, the PM core regards this as a fatal
error and will refuse to run the helper functions described in Section 4
for the device, until the status of it is directly set either to 'active'
or to 'suspended' (the PM core provides special helper functions for this
purpose).
* If the subsystem-level suspend callback returns an error code different
from -EBUSY or -EAGAIN, the PM core regards this as a fatal error and will
refuse to run the helper functions described in Section 4 for the device,
until the status of it is directly set either to 'active', or to 'suspended'
(the PM core provides special helper functions for this purpose).
In particular, if the driver requires remote wakeup capability for proper
functioning and device_run_wake() returns 'false' for the device, then
->runtime_suspend() should return -EBUSY. On the other hand, if
device_run_wake() returns 'true' for the device and the device is put
into a low power state during the execution of its bus type's
->runtime_suspend(), it is expected that remote wake-up (i.e. hardware mechanism
allowing the device to request a change of its power state, such as PCI PME)
will be enabled for the device. Generally, remote wake-up should be enabled
for all input devices put into a low power state at run time.
In particular, if the driver requires remote wake-up capability (i.e. hardware
mechanism allowing the device to request a change of its power state, such as
PCI PME) for proper functioning and device_run_wake() returns 'false' for the
device, then ->runtime_suspend() should return -EBUSY. On the other hand, if
device_run_wake() returns 'true' for the device and the device is put into a low
power state during the execution of the subsystem-level suspend callback, it is
expected that remote wake-up will be enabled for the device. Generally, remote
wake-up should be enabled for all input devices put into a low power state at
run time.
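A sketch of the rule just described, in the shape of a driver's
callback (the foo_* helpers are hypothetical):

	static int foo_runtime_suspend(struct device *dev)
	{
		/* we need remote wake-up to function, so refuse to
		 * suspend if the device cannot generate it at run time */
		if (!device_run_wake(dev))
			return -EBUSY;

		foo_save_state(dev);		/* hypothetical */
		foo_enable_remote_wakeup(dev);	/* hypothetical, e.g. arm PME */
		foo_enter_low_power(dev);	/* hypothetical */
		return 0;
	}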
The ->runtime_resume() callback is executed by the PM core for the bus type of
the device being woken up. The bus type's callback is then _entirely_
_responsible_ for handling the device as appropriate, which may, but need not
include executing the device driver's own ->runtime_resume() callback (from the
PM core's point of view it is not necessary to implement a ->runtime_resume()
callback in a device driver as long as the bus type's ->runtime_resume() knows
what to do to handle the device).
The subsystem-level resume callback is _entirely_ _responsible_ for handling the
resume of the device as appropriate, which may, but need not include executing
the device driver's own ->runtime_resume() callback (from the PM core's point of
view it is not necessary to implement a ->runtime_resume() callback in a device
driver as long as the subsystem-level resume callback knows what to do to handle
the device).
* Once the bus type's ->runtime_resume() callback has completed successfully,
the PM core regards the device as fully operational, which means that the
device _must_ be able to complete I/O operations as needed. The run-time
PM status of the device is then 'active'.
* Once the subsystem-level resume callback has completed successfully, the PM
core regards the device as fully operational, which means that the device
_must_ be able to complete I/O operations as needed. The run-time PM status
of the device is then 'active'.
* If the bus type's ->runtime_resume() callback returns an error code, the PM
core regards this as a fatal error and will refuse to run the helper
functions described in Section 4 for the device, until its status is
directly set either to 'active' or to 'suspended' (the PM core provides
special helper functions for this purpose).
* If the subsystem-level resume callback returns an error code, the PM core
regards this as a fatal error and will refuse to run the helper functions
described in Section 4 for the device, until its status is directly set
either to 'active' or to 'suspended' (the PM core provides special helper
functions for this purpose).
The ->runtime_idle() callback is executed by the PM core for the bus type of
given device whenever the device appears to be idle, which is indicated to the
PM core by two counters, the device's usage counter and the counter of 'active'
children of the device.
The subsystem-level idle callback is executed by the PM core whenever the device
appears to be idle, which is indicated to the PM core by two counters, the
device's usage counter and the counter of 'active' children of the device.
* If any of these counters is decreased using a helper function provided by
the PM core and it turns out to be equal to zero, the other counter is
checked. If that counter also is equal to zero, the PM core executes the
device bus type's ->runtime_idle() callback (with the device as an
argument).
subsystem-level idle callback with the device as an argument.
The action performed by a bus type's ->runtime_idle() callback is totally
dependent on the bus type in question, but the expected and recommended action
is to check if the device can be suspended (i.e. if all of the conditions
necessary for suspending the device are satisfied) and to queue up a suspend
request for the device in that case. The value returned by this callback is
ignored by the PM core.
The action performed by a subsystem-level idle callback is totally dependent on
the subsystem in question, but the expected and recommended action is to check
if the device can be suspended (i.e. if all of the conditions necessary for
suspending the device are satisfied) and to queue up a suspend request for the
device in that case. The value returned by this callback is ignored by the PM
core.
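For instance, a minimal idle callback along the recommended lines might
look like this (the readiness check is hypothetical):

	static int foo_runtime_idle(struct device *dev)
	{
		if (foo_may_suspend(dev))		/* hypothetical check */
			pm_schedule_suspend(dev, 0);	/* queue a suspend now */
		return 0;	/* return value is ignored by the PM core */
	}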
The helper functions provided by the PM core, described in Section 4, guarantee
that the following constraints are met with respect to the bus type's run-time
@ -238,41 +239,41 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
removing the device from device hierarchy
int pm_runtime_idle(struct device *dev);
- execute ->runtime_idle() for the device's bus type; returns 0 on success
or error code on failure, where -EINPROGRESS means that ->runtime_idle()
is already being executed
- execute the subsystem-level idle callback for the device; returns 0 on
success or error code on failure, where -EINPROGRESS means that
->runtime_idle() is already being executed
int pm_runtime_suspend(struct device *dev);
- execute ->runtime_suspend() for the device's bus type; returns 0 on
- execute the subsystem-level suspend callback for the device; returns 0 on
success, 1 if the device's run-time PM status was already 'suspended', or
error code on failure, where -EAGAIN or -EBUSY means it is safe to attempt
to suspend the device again in future
int pm_runtime_resume(struct device *dev);
- execute ->runtime_resume() for the device's bus type; returns 0 on
- execute the subsystem-level resume callback for the device; returns 0 on
success, 1 if the device's run-time PM status was already 'active' or
error code on failure, where -EAGAIN means it may be safe to attempt to
resume the device again in future, but 'power.runtime_error' should be
checked additionally
int pm_request_idle(struct device *dev);
- submit a request to execute ->runtime_idle() for the device's bus type
(the request is represented by a work item in pm_wq); returns 0 on success
or error code if the request has not been queued up
- submit a request to execute the subsystem-level idle callback for the
device (the request is represented by a work item in pm_wq); returns 0 on
success or error code if the request has not been queued up
int pm_schedule_suspend(struct device *dev, unsigned int delay);
- schedule the execution of ->runtime_suspend() for the device's bus type
in future, where 'delay' is the time to wait before queuing up a suspend
work item in pm_wq, in milliseconds (if 'delay' is zero, the work item is
queued up immediately); returns 0 on success, 1 if the device's PM
- schedule the execution of the subsystem-level suspend callback for the
device in future, where 'delay' is the time to wait before queuing up a
suspend work item in pm_wq, in milliseconds (if 'delay' is zero, the work
item is queued up immediately); returns 0 on success, 1 if the device's PM
run-time status was already 'suspended', or error code if the request
hasn't been scheduled (or queued up if 'delay' is 0); if the execution of
->runtime_suspend() is already scheduled and not yet expired, the new
value of 'delay' will be used as the time to wait
int pm_request_resume(struct device *dev);
- submit a request to execute ->runtime_resume() for the device's bus type
(the request is represented by a work item in pm_wq); returns 0 on
- submit a request to execute the subsystem-level resume callback for the
device (the request is represented by a work item in pm_wq); returns 0 on
success, 1 if the device's run-time PM status was already 'active', or
error code if the request hasn't been queued up
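To illustrate how a driver might combine these helpers (sketch only;
the I/O-completion hook is hypothetical):

	static void foo_io_done(struct device *dev)
	{
		/* let the device idle for 5 seconds, then try to suspend it */
		int ret = pm_schedule_suspend(dev, 5000);

		if (ret < 0)
			dev_dbg(dev, "suspend request not scheduled: %d\n", ret);
	}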
@ -303,12 +304,12 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
run-time PM callbacks described in Section 2
int pm_runtime_disable(struct device *dev);
- prevent the run-time PM helper functions from running the device bus
type's run-time PM callbacks, make sure that all of the pending run-time
PM operations on the device are either completed or canceled; returns
1 if there was a resume request pending and it was necessary to execute
->runtime_resume() for the device's bus type to satisfy that request,
otherwise 0 is returned
- prevent the run-time PM helper functions from running subsystem-level
run-time PM callbacks for the device, make sure that all of the pending
run-time PM operations on the device are either completed or canceled;
returns 1 if there was a resume request pending and it was necessary to
execute the subsystem-level resume callback for the device to satisfy that
request, otherwise 0 is returned
void pm_suspend_ignore_children(struct device *dev, bool enable);
- set/unset the power.ignore_children flag of the device
@ -378,5 +379,55 @@ pm_runtime_suspend() or pm_runtime_idle() or their asynchronous counterparts,
they will fail returning -EAGAIN, because the device's usage counter is
incremented by the core before executing ->probe() and ->remove(). Still, it
may be desirable to suspend the device as soon as ->probe() or ->remove() has
finished, so the PM core uses pm_runtime_idle_sync() to invoke the device bus
type's ->runtime_idle() callback at that time.
finished, so the PM core uses pm_runtime_idle_sync() to invoke the
subsystem-level idle callback for the device at that time.
6. Run-time PM and System Sleep
Run-time PM and system sleep (i.e., system suspend and hibernation, also known
as suspend-to-RAM and suspend-to-disk) interact with each other in a couple of
ways. If a device is active when a system sleep starts, everything is
straightforward. But what should happen if the device is already suspended?
The device may have different wake-up settings for run-time PM and system sleep.
For example, remote wake-up may be enabled for run-time suspend but disallowed
for system sleep (device_may_wakeup(dev) returns 'false'). When this happens,
the subsystem-level system suspend callback is responsible for changing the
device's wake-up setting (it may leave that to the device driver's system
suspend routine). It may be necessary to resume the device and suspend it again
in order to do so. The same is true if the driver uses different power levels
or other settings for run-time suspend and system sleep.
During system resume, devices generally should be brought back to full power,
even if they were suspended before the system sleep began. There are several
reasons for this, including:
* The device might need to switch power levels, wake-up settings, etc.
* Remote wake-up events might have been lost by the firmware.
* The device's children may need the device to be at full power in order
to resume themselves.
* The driver's idea of the device state may not agree with the device's
physical state. This can happen during resume from hibernation.
* The device might need to be reset.
* Even though the device was suspended, if its usage counter was > 0 then most
likely it would need a run-time resume in the near future anyway.
* Always going back to full power is simplest.
If the device was suspended before the sleep began, then its run-time PM status
will have to be updated to reflect the actual post-system sleep status. The way
to do this is:
pm_runtime_disable(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
The PM core always increments the run-time usage counter before calling the
->prepare() callback and decrements it after calling the ->complete() callback.
Hence disabling run-time PM temporarily like this will not cause any run-time
suspend callbacks to be lost.
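Putting the above together, a system-sleep resume callback for such a
device might end with this sequence (sketch; foo_restore_state() is
hypothetical):

	static int foo_resume(struct device *dev)
	{
		foo_restore_state(dev);	/* bring the device to full power */

		/* the device was runtime-suspended before the sleep began,
		 * so resync its run-time PM status with the hardware state */
		pm_runtime_disable(dev);
		pm_runtime_set_active(dev);
		pm_runtime_enable(dev);
		return 0;
	}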

View File

@ -0,0 +1,42 @@
* OpenPIC and its interrupt numbers on Freescale's e500/e600 cores
The OpenPIC specification does not specify which interrupt source has to
become which interrupt number. This is up to the software implementation
of the interrupt controller. The only requirement is that every
interrupt source has to have a unique interrupt number / vector number.
To accomplish this, the current implementation assigns the number zero to
the first source, the number one to the second source and so on until
all interrupt sources have their unique number.
Usually the assigned vector number equals the interrupt number mentioned
in the documentation for a given core / CPU. This is however not true
for the e500 cores (MPC85XX CPUs) where the documentation distinguishes
between internal and external interrupt sources and starts counting at
zero for both of them.
So what to write for external interrupt source X or internal interrupt
source Y into the device tree? Here is an example:
The memory map for the interrupt controller in the MPC8544[0] shows
that the first interrupt source starts at 0x5_0000 (PIC Register Address
Map-Interrupt Source Configuration Registers). This source therefore
becomes interrupt number zero:
External interrupt 0 = interrupt number 0
External interrupt 1 = interrupt number 1
External interrupt 2 = interrupt number 2
...
Every interrupt source occupies 0x20 bytes of register space, so to get
its number it is sufficient to shift the lower 16 bits of its register
offset right by five.
So for the external interrupt 10 we have:
0x0140 >> 5 = 10
After the external sources, the internal sources follow. The in-core I2C
controller on the MPC8544, for instance, has internal source number
27. To obtain its interrupt number we take the lower 16 bits of its memory
address (0x5_0560) and shift them right:
0x0560 >> 5 = 43
Therefore the I2C device node for the MPC8544 CPU has to have the
interrupt number 43 specified in the device tree.
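The computation can be captured in a one-line helper (the macro name is
made up for illustration):

	/* vector number = low 16 bits of the source's register offset / 0x20 */
	#define MPIC_OFFSET_TO_IRQ(off)	(((off) & 0xffff) >> 5)

	/* MPC8544 in-core I2C at 0x5_0560: (0x0560 & 0xffff) >> 5 == 43 */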
[0] MPC8544E PowerQUICC™ III, Integrated Host Processor Family Reference Manual
MPC8544ERM Rev. 1 10/2007

View File

@ -403,4 +403,5 @@ STAC9872
Cirrus Logic CS4206/4207
========================
mbp55 MacBook Pro 5,5
imac27 iMac 27 Inch
auto BIOS setup (default)

View File

@ -26,13 +26,33 @@ Procedure for submitting patches to the -stable tree:
- Send the patch, after verifying that it follows the above rules, to
stable@kernel.org.
- To have the patch automatically included in the stable tree, add the
tag
Cc: stable@kernel.org
in the sign-off area. Once the patch is merged it will be applied to
the stable tree without anything else needing to be done by the author
or subsystem maintainer.
- If the patch requires other patches as prerequisites which can be
cherry-picked, then this can be specified in the following format in
the sign-off area:
Cc: <stable@kernel.org> # .32.x: a1f84a3: sched: Check for idle
Cc: <stable@kernel.org> # .32.x: 1b9508f: sched: Rate-limit newidle
Cc: <stable@kernel.org> # .32.x: fd21073: sched: Fix affinity logic
Cc: <stable@kernel.org> # .32.x
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The tag sequence has the meaning of:
git cherry-pick a1f84a3
git cherry-pick 1b9508f
git cherry-pick fd21073
git cherry-pick <this commit>
- The sender will receive an ACK when the patch has been accepted into the
queue, or a NAK if the patch is rejected. This response might take a few
days, depending on the developers' schedules.
- If accepted, the patch will be added to the -stable queue, for review by
other developers and by the relevant subsystem maintainer.
- If the stable@kernel.org address is added to a patch, when it goes into
Linus's tree it will automatically be emailed to the stable team.
- Security patches should not be sent to this alias, but instead to the
documented security@kernel.org address.

View File

@ -1,7 +1,7 @@
Subsystem Trace Points: kmem
The tracing system kmem captures events related to object and page allocation
within the kernel. Broadly speaking there are four major subheadings.
The kmem tracing system captures events related to object and page allocation
within the kernel. Broadly speaking there are five major subheadings.
o Slab allocation of small objects of unknown type (kmalloc)
o Slab allocation of small objects of known type
@ -9,7 +9,7 @@ within the kernel. Broadly speaking there are four major subheadings.
o Per-CPU Allocator Activity
o External Fragmentation
This document will describe what each of the tracepoints are and why they
This document describes what each of the tracepoints is and why they
might be useful.
1. Slab allocation of small objects of unknown type
@ -34,7 +34,7 @@ kmem_cache_free call_site=%lx ptr=%p
These events are similar in usage to the kmalloc-related events except that
it is likely easier to pin the event down to a specific cache. At the time
of writing, no information is available on what slab is being allocated from,
but the call_site can usually be used to extrapolate that information
but the call_site can usually be used to extrapolate that information.
3. Page allocation
==================
@ -80,9 +80,9 @@ event indicating whether it is for a percpu_refill or not.
When the per-CPU list is too full, a number of pages are freed, each of
which triggers a mm_page_pcpu_drain event.
The individual nature of the events are so that pages can be tracked
The individual nature of the events is so that pages can be tracked
between allocation and freeing. A number of drain or refill pages that occur
consecutively imply the zone->lock being taken once. Large amounts of PCP
consecutively imply the zone->lock being taken once. Large amounts of per-CPU
refills and drains could imply an imbalance between CPUs where too much work
is being concentrated in one place. It could also indicate that the per-CPU
lists should be a larger size. Finally, large amounts of refills on one CPU
@ -102,6 +102,6 @@ is important.
Large numbers of this event imply that memory is fragmenting and
high-order allocations will start failing at some time in the future. One
means of reducing the occurange of this event is to increase the size of
means of reducing the occurrence of this event is to increase the size of
min_free_kbytes in increments of 3*pageblock_size*nr_online_nodes where
pageblock_size is usually the size of the default hugepage size.
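For example, on a single-node x86-64 machine with a 2 MB default hugepage
size, the suggested increment would be 3 * 2048 kB * 1 = 6144 kB.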

View File

@ -71,12 +71,10 @@ being accessed through sysfs, then it definitely is idle.
Forms of dynamic PM
-------------------
Dynamic suspends can occur in two ways: manual and automatic.
"Manual" means that the user has told the kernel to suspend a device,
whereas "automatic" means that the kernel has decided all by itself to
suspend a device. Automatic suspend is called "autosuspend" for
short. In general, a device won't be autosuspended unless it has been
idle for some minimum period of time, the so-called idle-delay time.
Dynamic suspends occur when the kernel decides to suspend an idle
device. This is called "autosuspend" for short. In general, a device
won't be autosuspended unless it has been idle for some minimum period
of time, the so-called idle-delay time.
Of course, nothing the kernel does on its own initiative should
prevent the computer or its devices from working properly. If a
@ -96,10 +94,11 @@ idle.
We can categorize power management events in two broad classes:
external and internal. External events are those triggered by some
agent outside the USB stack: system suspend/resume (triggered by
userspace), manual dynamic suspend/resume (also triggered by
userspace), and remote wakeup (triggered by the device). Internal
events are those triggered within the USB stack: autosuspend and
autoresume.
userspace), manual dynamic resume (also triggered by userspace), and
remote wakeup (triggered by the device). Internal events are those
triggered within the USB stack: autosuspend and autoresume. Note that
all dynamic suspend events are internal; external agents are not
allowed to issue dynamic suspends.
The user interface for dynamic PM
@ -145,9 +144,9 @@ relevant attribute files are: wakeup, level, and autosuspend.
number of seconds the device should remain idle before
the kernel will autosuspend it (the idle-delay time).
The default is 2. 0 means to autosuspend as soon as
the device becomes idle, and -1 means never to
autosuspend. You can write a number to the file to
change the autosuspend idle-delay time.
the device becomes idle, and negative values mean
never to autosuspend. You can write a number to the
file to change the autosuspend idle-delay time.
Writing "-1" to power/autosuspend and writing "on" to power/level do
essentially the same thing -- they both prevent the device from being
@ -377,9 +376,9 @@ the device hasn't been idle for long enough, a delayed workqueue
routine is automatically set up to carry out the operation when the
autosuspend idle-delay has expired.
Autoresume attempts also can fail. This will happen if power/level is
set to "suspend" or if the device doesn't manage to resume properly.
Unlike autosuspend, there's no delay for an autoresume.
Autoresume attempts also can fail, although failure would mean that
the device is no longer present or operating properly. Unlike
autosuspend, there's no delay for an autoresume.
Other parts of the driver interface
@ -527,13 +526,3 @@ succeed, it may still remain active and thus cause the system to
resume as soon as the system suspend is complete. Or the remote
wakeup may fail and get lost. Which outcome occurs depends on timing
and on the hardware and firmware design.
More interestingly, a device might undergo a manual resume or
autoresume during system suspend. With current kernels this shouldn't
happen, because manual resumes must be initiated by userspace and
autoresumes happen in response to I/O requests, but all user processes
and I/O should be quiescent during a system suspend -- thanks to the
freezer. However there are plans to do away with the freezer, which
would mean these things would become possible. If and when this comes
about, the USB core will carefully arrange matters so that either type
of resume will block until the entire system has resumed.

View File

@ -1402,6 +1402,8 @@ L: linux-usb@vger.kernel.org
S: Supported
F: Documentation/usb/WUSB-Design-overview.txt
F: Documentation/usb/wusb-cbaf
F: drivers/usb/host/hwa-hc.c
F: drivers/usb/host/whci/
F: drivers/usb/wusbcore/
F: include/linux/usb/wusb*
@ -3677,7 +3679,7 @@ F: include/linux/isicom.h
MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER
M: Felipe Balbi <felipe.balbi@nokia.com>
L: linux-usb@vger.kernel.org
T: git git://gitorious.org/musb/mainline.git
T: git git://gitorious.org/usb/usb.git
S: Maintained
F: drivers/usb/musb/
@ -5430,7 +5432,10 @@ ULTRA-WIDEBAND (UWB) SUBSYSTEM:
M: David Vrabel <david.vrabel@csr.com>
L: linux-usb@vger.kernel.org
S: Supported
F: drivers/uwb/*
F: drivers/uwb/
X: drivers/uwb/wlp/
X: drivers/uwb/i1480/i1480u-wlp/
X: drivers/uwb/i1480/i1480-wlp.h
F: include/linux/uwb.h
F: include/linux/uwb/
@ -5943,9 +5948,12 @@ W: http://linuxwimax.org
WIMEDIA LLC PROTOCOL (WLP) SUBSYSTEM
M: David Vrabel <david.vrabel@csr.com>
L: netdev@vger.kernel.org
S: Maintained
F: include/linux/wlp.h
F: drivers/uwb/wlp/
F: drivers/uwb/i1480/i1480u-wlp/
F: drivers/uwb/i1480/i1480-wlp.h
WISTRON LAPTOP BUTTON DRIVER
M: Miloslav Trmac <mitr@volny.cz>

View File

@ -108,12 +108,19 @@ CPR0: cpr {
dcr-reg = <0x00c 0x002>;
};
MQ0: mq {
compatible = "ibm,mq-440spe";
dcr-reg = <0x040 0x020>;
};
plb {
compatible = "ibm,plb-440spe", "ibm,plb-440gp", "ibm,plb4";
#address-cells = <2>;
#size-cells = <1>;
/* addr-child addr-parent size */
ranges = <0x4 0xe0000000 0x4 0xe0000000 0x20000000
ranges = <0x4 0x00100000 0x4 0x00100000 0x00001000
0x4 0x00200000 0x4 0x00200000 0x00000400
0x4 0xe0000000 0x4 0xe0000000 0x20000000
0xc 0x00000000 0xc 0x00000000 0x20000000
0xd 0x00000000 0xd 0x00000000 0x80000000
0xd 0x80000000 0xd 0x80000000 0x80000000
@ -400,6 +407,49 @@ PCIE2: pciex@d40000000 {
0x0 0x0 0x0 0x3 &UIC3 0xa 0x4 /* swizzled int C */
0x0 0x0 0x0 0x4 &UIC3 0xb 0x4 /* swizzled int D */>;
};
I2O: i2o@400100000 {
compatible = "ibm,i2o-440spe";
reg = <0x00000004 0x00100000 0x100>;
dcr-reg = <0x060 0x020>;
};
DMA0: dma0@400100100 {
compatible = "ibm,dma-440spe";
cell-index = <0>;
reg = <0x00000004 0x00100100 0x100>;
dcr-reg = <0x060 0x020>;
interrupt-parent = <&DMA0>;
interrupts = <0 1>;
#interrupt-cells = <1>;
#address-cells = <0>;
#size-cells = <0>;
interrupt-map = <
0 &UIC0 0x14 4
1 &UIC1 0x16 4>;
};
DMA1: dma1@400100200 {
compatible = "ibm,dma-440spe";
cell-index = <1>;
reg = <0x00000004 0x00100200 0x100>;
dcr-reg = <0x060 0x020>;
interrupt-parent = <&DMA1>;
interrupts = <0 1>;
#interrupt-cells = <1>;
#address-cells = <0>;
#size-cells = <0>;
interrupt-map = <
0 &UIC0 0x16 4
1 &UIC1 0x16 4>;
};
xor-accel@400200000 {
compatible = "amcc,xor-accelerator";
reg = <0x00000004 0x00200000 0x400>;
interrupt-parent = <&UIC1>;
interrupts = <0x1f 4>;
};
};
chosen {

View File

@ -204,6 +204,7 @@ enet0: ethernet@24000 {
interrupt-parent = <&ipic>;
tbi-handle = <&tbi0>;
phy-handle = < &phy0 >;
fsl,magic-packet;
mdio@520 {
#address-cells = <1>;
@ -246,6 +247,7 @@ enet1: ethernet@25000 {
interrupt-parent = <&ipic>;
tbi-handle = <&tbi1>;
phy-handle = < &phy1 >;
fsl,magic-packet;
mdio@520 {
#address-cells = <1>;
@ -309,6 +311,22 @@ sata@19000 {
interrupt-parent = <&ipic>;
};
gtm1: timer@500 {
compatible = "fsl,mpc8315-gtm", "fsl,gtm";
reg = <0x500 0x100>;
interrupts = <90 8 78 8 84 8 72 8>;
interrupt-parent = <&ipic>;
clock-frequency = <133333333>;
};
timer@600 {
compatible = "fsl,mpc8315-gtm", "fsl,gtm";
reg = <0x600 0x100>;
interrupts = <91 8 79 8 85 8 73 8>;
interrupt-parent = <&ipic>;
clock-frequency = <133333333>;
};
/* IPIC
* interrupts cell = <intr #, sense>
* sense values match linux IORESOURCE_IRQ_* defines:
@ -337,6 +355,15 @@ ipic-msi@7c0 {
0x59 0x8>;
interrupt-parent = < &ipic >;
};
pmc: power@b00 {
compatible = "fsl,mpc8315-pmc", "fsl,mpc8313-pmc",
"fsl,mpc8349-pmc";
reg = <0xb00 0x100 0xa00 0x100>;
interrupts = <80 8>;
interrupt-parent = <&ipic>;
fsl,mpc8313-wakeup-timer = <&gtm1>;
};
};
pci0: pci@e0008500 {

View File

@ -63,6 +63,24 @@ wdt@200 {
reg = <0x200 0x100>;
};
gpio1: gpio-controller@c00 {
#gpio-cells = <2>;
compatible = "fsl,mpc8349-gpio";
reg = <0xc00 0x100>;
interrupts = <74 0x8>;
interrupt-parent = <&ipic>;
gpio-controller;
};
gpio2: gpio-controller@d00 {
#gpio-cells = <2>;
compatible = "fsl,mpc8349-gpio";
reg = <0xd00 0x100>;
interrupts = <75 0x8>;
interrupt-parent = <&ipic>;
gpio-controller;
};
i2c@3000 {
#address-cells = <1>;
#size-cells = <0>;
@ -72,6 +90,12 @@ i2c@3000 {
interrupts = <14 0x8>;
interrupt-parent = <&ipic>;
dfsrr;
eeprom: at24@50 {
compatible = "st-micro,24c256";
reg = <0x50>;
};
};
i2c@3100 {
@ -91,6 +115,25 @@ rtc@68 {
interrupt-parent = <&ipic>;
};
pcf1: iexp@38 {
#gpio-cells = <2>;
compatible = "ti,pcf8574a";
reg = <0x38>;
gpio-controller;
};
pcf2: iexp@39 {
#gpio-cells = <2>;
compatible = "ti,pcf8574a";
reg = <0x39>;
gpio-controller;
};
spd: at24@51 {
compatible = "at24,spd";
reg = <0x51>;
};
mcu_pio: mcu@a {
#gpio-cells = <2>;
compatible = "fsl,mc9s08qg8-mpc8349emitx",
@ -275,6 +318,24 @@ ipic: pic@700 {
reg = <0x700 0x100>;
device_type = "ipic";
};
gpio-leds {
compatible = "gpio-leds";
green {
label = "Green";
gpios = <&pcf1 0 1>;
linux,default-trigger = "heartbeat";
};
yellow {
label = "Yellow";
gpios = <&pcf1 1 1>;
/* linux,default-trigger = "heartbeat"; */
default-state = "on";
};
};
};
pci0: pci@e0008500 {
@ -331,7 +392,26 @@ localbus@e0005000 {
compatible = "fsl,mpc8349e-localbus",
"fsl,pq2pro-localbus";
reg = <0xe0005000 0xd8>;
ranges = <0x3 0x0 0xf0000000 0x210>;
ranges = <0x0 0x0 0xfe000000 0x1000000 /* flash */
0x1 0x0 0xf8000000 0x20000 /* VSC 7385 */
0x2 0x0 0xf9000000 0x200000 /* exp slot */
0x3 0x0 0xf0000000 0x210>; /* CF slot */
flash@0,0 {
compatible = "cfi-flash";
reg = <0x0 0x0 0x800000>;
bank-width = <2>;
device-width = <1>;
};
flash@0,800000 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "cfi-flash";
reg = <0x0 0x800000 0x800000>;
bank-width = <2>;
device-width = <1>;
};
pata@3,0 {
compatible = "fsl,mpc8349emitx-pata", "ata-generic";

View File

@ -146,7 +146,7 @@ fpga@2,2000 {
fpga@2,4000 {
compatible = "pika,fpga-sd";
reg = <0x00000002 0x00004000 0x00000A00>;
reg = <0x00000002 0x00004000 0x00004000>;
};
nor@0,0 {

View File

@ -86,7 +86,7 @@ static void ug_putc(char ch)
while (!ug_is_txfifo_ready() && count--)
barrier();
if (count)
if (count >= 0)
ug_raw_putc(ch);
}
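The fix above is a classic post-decrement off-by-one; a standalone sketch
of the two exit states (UG_TIMEOUT is a hypothetical bound):

	int count = UG_TIMEOUT;
	while (!ug_is_txfifo_ready() && count--)
		barrier();
	/* timeout: count == -1, so the old 'if (count)' wrongly sent    */
	/* ready:   count >= 0, but it could be exactly 0, which the     */
	/*          old test wrongly treated as a timeout                */
	if (count >= 0)
		ug_raw_putc(ch);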

View File

@ -757,7 +757,7 @@ CONFIG_SUNGEM=y
# CONFIG_B44 is not set
# CONFIG_ATL2 is not set
CONFIG_NETDEV_1000=y
CONFIG_ACENIC=y
CONFIG_ACENIC=m
CONFIG_ACENIC_OMIT_TIGON_I=y
# CONFIG_DL2K is not set
CONFIG_E1000=y
@ -794,8 +794,8 @@ CONFIG_NETDEV_10000=y
# CONFIG_BNX2X is not set
# CONFIG_QLGE is not set
# CONFIG_SFC is not set
CONFIG_TR=y
CONFIG_IBMOL=y
# CONFIG_TR is not set
# CONFIG_IBMOL is not set
# CONFIG_3C359 is not set
# CONFIG_TMS380TR is not set

View File

@ -714,8 +714,8 @@ CONFIG_NETDEV_10000=y
# CONFIG_BNX2X is not set
# CONFIG_QLGE is not set
# CONFIG_SFC is not set
CONFIG_TR=y
CONFIG_IBMOL=y
# CONFIG_TR is not set
# CONFIG_IBMOL is not set
# CONFIG_3C359 is not set
# CONFIG_TMS380TR is not set

View File

@ -304,11 +304,11 @@ CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
CONFIG_HZ_100=y
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_HZ=100
CONFIG_SCHED_HRTICK=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
@ -980,7 +980,7 @@ CONFIG_E100=y
# CONFIG_SC92031 is not set
# CONFIG_ATL2 is not set
CONFIG_NETDEV_1000=y
CONFIG_ACENIC=y
CONFIG_ACENIC=m
CONFIG_ACENIC_OMIT_TIGON_I=y
# CONFIG_DL2K is not set
CONFIG_E1000=y
@ -1023,8 +1023,8 @@ CONFIG_PASEMI_MAC=y
# CONFIG_BNX2X is not set
# CONFIG_QLGE is not set
# CONFIG_SFC is not set
CONFIG_TR=y
CONFIG_IBMOL=y
# CONFIG_TR is not set
# CONFIG_IBMOL is not set
# CONFIG_3C359 is not set
# CONFIG_TMS380TR is not set
@ -1863,7 +1863,7 @@ CONFIG_HFSPLUS_FS=m
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
CONFIG_CRAMFS=y
CONFIG_CRAMFS=m
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set

View File

@ -1008,8 +1008,8 @@ CONFIG_IXGB=m
# CONFIG_QLGE is not set
# CONFIG_SFC is not set
# CONFIG_BE2NET is not set
CONFIG_TR=y
CONFIG_IBMOL=y
# CONFIG_TR is not set
# CONFIG_IBMOL is not set
# CONFIG_3C359 is not set
# CONFIG_TMS380TR is not set
CONFIG_WLAN=y

View File

@ -230,11 +230,11 @@ CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
CONFIG_HZ_100=y
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_HZ=100
CONFIG_SCHED_HRTICK=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
@ -796,7 +796,7 @@ CONFIG_E100=y
# CONFIG_NET_POCKET is not set
# CONFIG_ATL2 is not set
CONFIG_NETDEV_1000=y
CONFIG_ACENIC=y
CONFIG_ACENIC=m
CONFIG_ACENIC_OMIT_TIGON_I=y
# CONFIG_DL2K is not set
CONFIG_E1000=y
@ -834,8 +834,8 @@ CONFIG_S2IO=m
# CONFIG_BNX2X is not set
# CONFIG_QLGE is not set
# CONFIG_SFC is not set
CONFIG_TR=y
CONFIG_IBMOL=y
# CONFIG_TR is not set
# CONFIG_IBMOL is not set
# CONFIG_3C359 is not set
# CONFIG_TMS380TR is not set
@ -1494,7 +1494,7 @@ CONFIG_CONFIGFS_FS=m
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
CONFIG_CRAMFS=y
CONFIG_CRAMFS=m
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set

View File

@ -68,7 +68,7 @@
_EMIT_BUG_ENTRY \
: : "i" (__FILE__), "i" (__LINE__), \
"i" (0), "i" (sizeof(struct bug_entry))); \
for(;;) ; \
unreachable(); \
} while (0)
#define BUG_ON(x) do { \

View File

@ -38,12 +38,9 @@ static inline int gpio_cansleep(unsigned int gpio)
return __gpio_cansleep(gpio);
}
/*
* Not implemented, yet.
*/
static inline int gpio_to_irq(unsigned int gpio)
{
return -ENOSYS;
return __gpio_to_irq(gpio);
}
static inline int irq_to_gpio(unsigned int irq)

View File

@ -642,10 +642,14 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
*/
static int emulate_vsx(unsigned char __user *addr, unsigned int reg,
unsigned int areg, struct pt_regs *regs,
unsigned int flags, unsigned int length)
unsigned int flags, unsigned int length,
unsigned int elsize)
{
char *ptr;
unsigned long *lptr;
int ret = 0;
int sw = 0;
int i, j;
flush_vsx_to_thread(current);
@ -654,19 +658,35 @@ static int emulate_vsx(unsigned char __user *addr, unsigned int reg,
else
ptr = (char *) &current->thread.vr[reg - 32];
if (flags & ST)
ret = __copy_to_user(addr, ptr, length);
else {
if (flags & SPLT){
ret = __copy_from_user(ptr, addr, length);
ptr += length;
lptr = (unsigned long *) ptr;
if (flags & SW)
sw = elsize-1;
for (j = 0; j < length; j += elsize) {
for (i = 0; i < elsize; ++i) {
if (flags & ST)
ret |= __put_user(ptr[i^sw], addr + i);
else
ret |= __get_user(ptr[i^sw], addr + i);
}
ret |= __copy_from_user(ptr, addr, length);
ptr += elsize;
addr += elsize;
}
if (flags & U)
regs->gpr[areg] = regs->dar;
if (ret)
if (!ret) {
if (flags & U)
regs->gpr[areg] = regs->dar;
/* Splat load copies the same data to top and bottom 8 bytes */
if (flags & SPLT)
lptr[1] = lptr[0];
/* For 8 byte loads, zero the top 8 bytes */
else if (!(flags & ST) && (8 == length))
lptr[1] = 0;
} else
return -EFAULT;
return 1;
}
#endif
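The ptr[i^sw] indexing above is a compact byte reversal within each
element; a freestanding demonstration (not kernel code):

	#include <stdio.h>

	int main(void)
	{
		unsigned char src[8] = {0, 1, 2, 3, 4, 5, 6, 7}, dst[8];
		int sw = 8 - 1;			/* elsize - 1, as in the patch */
		int i;

		for (i = 0; i < 8; ++i)
			dst[i] = src[i ^ sw];	/* copies bytes in reverse */
		for (i = 0; i < 8; ++i)
			printf("%d ", dst[i]);	/* prints: 7 6 5 4 3 2 1 0 */
		printf("\n");
		return 0;
	}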
@ -767,16 +787,25 @@ int fix_alignment(struct pt_regs *regs)
#ifdef CONFIG_VSX
if ((instruction & 0xfc00003e) == 0x7c000018) {
/* Additional register addressing bit (64 VSX vs 32 FPR/GPR */
unsigned int elsize;
/* Additional register addressing bit (64 VSX vs 32 FPR/GPR) */
reg |= (instruction & 0x1) << 5;
/* Simple inline decoder instead of a table */
/* VSX has only 8 and 16 byte memory accesses */
nb = 8;
if (instruction & 0x200)
nb = 16;
else if (instruction & 0x080)
nb = 8;
else
nb = 4;
/* Vector stores in little-endian mode swap individual
elements, so process them separately */
elsize = 4;
if (instruction & 0x80)
elsize = 8;
flags = 0;
if (regs->msr & MSR_LE)
flags |= SW;
if (instruction & 0x100)
flags |= ST;
if (instruction & 0x040)
@ -787,7 +816,7 @@ int fix_alignment(struct pt_regs *regs)
nb = 8;
}
PPC_WARN_ALIGNMENT(vsx, regs);
return emulate_vsx(addr, reg, areg, regs, flags, nb);
return emulate_vsx(addr, reg, areg, regs, flags, nb, elsize);
}
#endif
/* A size of 0 indicates an instruction we don't support, with

View File

@ -340,7 +340,7 @@ static int __init htab_dt_scan_page_sizes(unsigned long node,
else
def->tlbiel = 0;
DBG(" %d: shift=%02x, sllp=%04x, avpnm=%08x, "
DBG(" %d: shift=%02x, sllp=%04lx, avpnm=%08lx, "
"tlbiel=%d, penc=%d\n",
idx, shift, def->sllp, def->avpnm, def->tlbiel,
def->penc);
@ -663,7 +663,7 @@ static void __init htab_initialize(void)
base = (unsigned long)__va(lmb.memory.region[i].base);
size = lmb.memory.region[i].size;
DBG("creating mapping for region: %lx..%lx (prot: %x)\n",
DBG("creating mapping for region: %lx..%lx (prot: %lx)\n",
base, size, prot);
#ifdef CONFIG_U3_DART
@ -879,7 +879,7 @@ static inline int subpage_protection(struct mm_struct *mm, unsigned long ea)
*/
int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
{
void *pgdir;
pgd_t *pgdir;
unsigned long vsid;
struct mm_struct *mm;
pte_t *ptep;
@ -1025,7 +1025,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
else
#endif /* CONFIG_PPC_HAS_HASH_64K */
{
int spp = subpage_protection(pgdir, ea);
int spp = subpage_protection(mm, ea);
if (access & spp)
rc = -2;
else
@ -1115,7 +1115,7 @@ void flush_hash_page(unsigned long va, real_pte_t pte, int psize, int ssize,
{
unsigned long hash, index, shift, hidx, slot;
DBG_LOW("flush_hash_page(va=%016x)\n", va);
DBG_LOW("flush_hash_page(va=%016lx)\n", va);
pte_iterate_hashed_subpages(pte, psize, va, index, shift) {
hash = hpt_hash(va, shift, ssize);
hidx = __rpte_to_hidx(pte, index);
@ -1123,7 +1123,7 @@ void flush_hash_page(unsigned long va, real_pte_t pte, int psize, int ssize,
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
slot += hidx & _PTEIDX_GROUP_IX;
DBG_LOW(" sub %d: hash=%x, hidx=%x\n", index, slot, hidx);
DBG_LOW(" sub %ld: hash=%lx, hidx=%lx\n", index, slot, hidx);
ppc_md.hpte_invalidate(slot, va, psize, ssize, local);
} pte_iterate_hashed_end();
}

View File

@ -353,7 +353,7 @@ static int __cpuinit mmu_context_cpu_notify(struct notifier_block *self,
read_lock(&tasklist_lock);
for_each_process(p) {
if (p->mm)
cpu_mask_clear_cpu(cpu, mm_cpumask(p->mm));
cpumask_clear_cpu(cpu, mm_cpumask(p->mm));
}
read_unlock(&tasklist_lock);
break;

View File

@ -382,7 +382,7 @@ static int __change_page_attr(struct page *page, pgprot_t prot)
return 0;
if (!get_pteptr(&init_mm, address, &kpte, &kpmd))
return -EINVAL;
set_pte_at(&init_mm, address, kpte, mk_pte(page, prot));
__set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
wmb();
#ifdef CONFIG_PPC_STD_MMU
flush_hash_pages(0, address, pmd_val(*kpmd), 1);

View File

@ -32,6 +32,7 @@
#define PMCCR1_NEXT_STATE 0x0C /* Next state for power management */
#define PMCCR1_NEXT_STATE_SHIFT 2
#define PMCCR1_CURR_STATE 0x03 /* Current state for power management*/
#define IMMR_SYSCR_OFFSET 0x100
#define IMMR_RCW_OFFSET 0x900
#define RCW_PCI_HOST 0x80000000
@ -78,6 +79,22 @@ struct mpc83xx_clock {
u32 sccr;
};
struct mpc83xx_syscr {
__be32 sgprl;
__be32 sgprh;
__be32 spridr;
__be32 :32;
__be32 spcr;
__be32 sicrl;
__be32 sicrh;
};
struct mpc83xx_saved {
u32 sicrl;
u32 sicrh;
u32 sccr;
};
struct pmc_type {
int has_deep_sleep;
};
@ -87,6 +104,8 @@ static int has_deep_sleep, deep_sleeping;
static int pmc_irq;
static struct mpc83xx_pmc __iomem *pmc_regs;
static struct mpc83xx_clock __iomem *clock_regs;
static struct mpc83xx_syscr __iomem *syscr_regs;
static struct mpc83xx_saved saved_regs;
static int is_pci_agent, wake_from_pci;
static phys_addr_t immrbase;
static int pci_pm_state;
@ -137,6 +156,20 @@ static irqreturn_t pmc_irq_handler(int irq, void *dev_id)
return ret;
}
static void mpc83xx_suspend_restore_regs(void)
{
out_be32(&syscr_regs->sicrl, saved_regs.sicrl);
out_be32(&syscr_regs->sicrh, saved_regs.sicrh);
out_be32(&clock_regs->sccr, saved_regs.sccr);
}
static void mpc83xx_suspend_save_regs(void)
{
saved_regs.sicrl = in_be32(&syscr_regs->sicrl);
saved_regs.sicrh = in_be32(&syscr_regs->sicrh);
saved_regs.sccr = in_be32(&clock_regs->sccr);
}
static int mpc83xx_suspend_enter(suspend_state_t state)
{
int ret = -EAGAIN;
@ -166,6 +199,8 @@ static int mpc83xx_suspend_enter(suspend_state_t state)
*/
if (deep_sleeping) {
mpc83xx_suspend_save_regs();
out_be32(&pmc_regs->mask, PMCER_ALL);
out_be32(&pmc_regs->config1,
@ -179,6 +214,8 @@ static int mpc83xx_suspend_enter(suspend_state_t state)
in_be32(&pmc_regs->config1) & ~PMCCR1_POWER_OFF);
out_be32(&pmc_regs->mask, PMCER_PMCI);
mpc83xx_suspend_restore_regs();
} else {
out_be32(&pmc_regs->mask, PMCER_PMCI);
@ -194,7 +231,7 @@ static int mpc83xx_suspend_enter(suspend_state_t state)
return ret;
}
static void mpc83xx_suspend_finish(void)
static void mpc83xx_suspend_end(void)
{
deep_sleeping = 0;
}
@ -278,7 +315,7 @@ static struct platform_suspend_ops mpc83xx_suspend_ops = {
.valid = mpc83xx_suspend_valid,
.begin = mpc83xx_suspend_begin,
.enter = mpc83xx_suspend_enter,
.finish = mpc83xx_suspend_finish,
.end = mpc83xx_suspend_end,
};
static int pmc_probe(struct of_device *ofdev,
@ -333,12 +370,23 @@ static int pmc_probe(struct of_device *ofdev,
goto out_pmc;
}
if (has_deep_sleep) {
syscr_regs = ioremap(immrbase + IMMR_SYSCR_OFFSET,
sizeof(*syscr_regs));
if (!syscr_regs) {
ret = -ENOMEM;
goto out_syscr;
}
}
if (is_pci_agent)
mpc83xx_set_agent();
suspend_set_ops(&mpc83xx_suspend_ops);
return 0;
out_syscr:
iounmap(clock_regs);
out_pmc:
iounmap(pmc_regs);
out:

View File

@ -86,7 +86,7 @@ static int mpc8568_fixup_125_clock(struct phy_device *phydev)
scr = phy_read(phydev, MV88E1111_SCR);
if (scr < 0)
return err;
return scr;
err = phy_write(phydev, MV88E1111_SCR, scr | 0x0008);

View File

@ -102,7 +102,7 @@ static int flipper_pic_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hwirq)
{
set_irq_chip_data(virq, h->host_data);
get_irq_desc(virq)->status |= IRQ_LEVEL;
irq_to_desc(virq)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(virq, &flipper_pic, handle_level_irq);
return 0;
}

View File

@ -95,7 +95,7 @@ static int hlwd_pic_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hwirq)
{
set_irq_chip_data(virq, h->host_data);
get_irq_desc(virq)->status |= IRQ_LEVEL;
irq_to_desc(virq)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(virq, &hlwd_pic, handle_level_irq);
return 0;
}
@ -132,9 +132,9 @@ static void hlwd_pic_irq_cascade(unsigned int cascade_virq,
struct irq_host *irq_host = get_irq_data(cascade_virq);
unsigned int virq;
spin_lock(&desc->lock);
raw_spin_lock(&desc->lock);
desc->chip->mask(cascade_virq); /* IRQ_LEVEL */
spin_unlock(&desc->lock);
raw_spin_unlock(&desc->lock);
virq = __hlwd_pic_get_irq(irq_host);
if (virq != NO_IRQ)
@ -142,11 +142,11 @@ static void hlwd_pic_irq_cascade(unsigned int cascade_virq,
else
pr_err("spurious interrupt!\n");
spin_lock(&desc->lock);
raw_spin_lock(&desc->lock);
desc->chip->ack(cascade_virq); /* IRQ_LEVEL */
if (!(desc->status & IRQ_DISABLED) && desc->chip->unmask)
desc->chip->unmask(cascade_virq);
spin_unlock(&desc->lock);
raw_spin_unlock(&desc->lock);
}
/*

View File

@ -120,7 +120,7 @@ static void ug_putc(char ch)
while (!ug_is_txfifo_ready() && count--)
barrier();
if (count)
if (count >= 0)
ug_raw_putc(ch);
}

View File

@ -855,59 +855,58 @@ static int mf_get_boot_rtc(struct rtc_time *tm)
}
#ifdef CONFIG_PROC_FS
static int proc_mf_dump_cmdline(char *page, char **start, off_t off,
int count, int *eof, void *data)
static int mf_cmdline_proc_show(struct seq_file *m, void *v)
{
int len;
char *p;
char *page, *p;
struct vsp_cmd_data vsp_cmd;
int rc;
dma_addr_t dma_addr;
/* The HV appears to return no more than 256 bytes of command line */
if (off >= 256)
return 0;
if ((off + count) > 256)
count = 256 - off;
dma_addr = iseries_hv_map(page, off + count, DMA_FROM_DEVICE);
if (dma_addr == DMA_ERROR_CODE)
page = kmalloc(256, GFP_KERNEL);
if (!page)
return -ENOMEM;
memset(page, 0, off + count);
dma_addr = iseries_hv_map(page, 256, DMA_FROM_DEVICE);
if (dma_addr == DMA_ERROR_CODE) {
kfree(page);
return -ENOMEM;
}
memset(page, 0, 256);
memset(&vsp_cmd, 0, sizeof(vsp_cmd));
vsp_cmd.cmd = 33;
vsp_cmd.sub_data.kern.token = dma_addr;
vsp_cmd.sub_data.kern.address_type = HvLpDma_AddressType_TceIndex;
vsp_cmd.sub_data.kern.side = (u64)data;
vsp_cmd.sub_data.kern.length = off + count;
vsp_cmd.sub_data.kern.side = (u64)m->private;
vsp_cmd.sub_data.kern.length = 256;
mb();
rc = signal_vsp_instruction(&vsp_cmd);
iseries_hv_unmap(dma_addr, off + count, DMA_FROM_DEVICE);
if (rc)
iseries_hv_unmap(dma_addr, 256, DMA_FROM_DEVICE);
if (rc) {
kfree(page);
return rc;
if (vsp_cmd.result_code != 0)
}
if (vsp_cmd.result_code != 0) {
kfree(page);
return -ENOMEM;
}
p = page;
len = 0;
while (len < (off + count)) {
if ((*p == '\0') || (*p == '\n')) {
if (*p == '\0')
*p = '\n';
p++;
len++;
*eof = 1;
while (p - page < 256) {
if (*p == '\0' || *p == '\n') {
*p = '\n';
break;
}
p++;
len++;
}
if (len < off) {
*eof = 1;
len = 0;
}
return len;
seq_write(m, page, p - page);
kfree(page);
return 0;
}
static int mf_cmdline_proc_open(struct inode *inode, struct file *file)
{
return single_open(file, mf_cmdline_proc_show, PDE(inode)->data);
}
#if 0
@ -962,10 +961,8 @@ static int proc_mf_dump_vmlinux(char *page, char **start, off_t off,
}
#endif
static int proc_mf_dump_side(char *page, char **start, off_t off,
int count, int *eof, void *data)
static int mf_side_proc_show(struct seq_file *m, void *v)
{
int len;
char mf_current_side = ' ';
struct vsp_cmd_data vsp_cmd;
@ -989,21 +986,17 @@ static int proc_mf_dump_side(char *page, char **start, off_t off,
}
}
len = sprintf(page, "%c\n", mf_current_side);
if (len <= (off + count))
*eof = 1;
*start = page + off;
len -= off;
if (len > count)
len = count;
if (len < 0)
len = 0;
return len;
seq_printf(m, "%c\n", mf_current_side);
return 0;
}
static int proc_mf_change_side(struct file *file, const char __user *buffer,
unsigned long count, void *data)
static int mf_side_proc_open(struct inode *inode, struct file *file)
{
return single_open(file, mf_side_proc_show, NULL);
}
static ssize_t mf_side_proc_write(struct file *file, const char __user *buffer,
size_t count, loff_t *pos)
{
char side;
u64 newSide;
@ -1041,6 +1034,15 @@ static int proc_mf_change_side(struct file *file, const char __user *buffer,
return count;
}
static const struct file_operations mf_side_proc_fops = {
.owner = THIS_MODULE,
.open = mf_side_proc_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
.write = mf_side_proc_write,
};
#if 0
static void mf_getSrcHistory(char *buffer, int size)
{
@ -1087,8 +1089,7 @@ static void mf_getSrcHistory(char *buffer, int size)
}
#endif
static int proc_mf_dump_src(char *page, char **start, off_t off,
int count, int *eof, void *data)
static int mf_src_proc_show(struct seq_file *m, void *v)
{
#if 0
int len;
@ -1109,8 +1110,13 @@ static int proc_mf_dump_src(char *page, char **start, off_t off,
#endif
}
static int proc_mf_change_src(struct file *file, const char __user *buffer,
unsigned long count, void *data)
static int mf_src_proc_open(struct inode *inode, struct file *file)
{
return single_open(file, mf_src_proc_show, NULL);
}
static ssize_t mf_src_proc_write(struct file *file, const char __user *buffer,
size_t count, loff_t *pos)
{
char stkbuf[10];
@ -1135,9 +1141,19 @@ static int proc_mf_change_src(struct file *file, const char __user *buffer,
return count;
}
static int proc_mf_change_cmdline(struct file *file, const char __user *buffer,
unsigned long count, void *data)
static const struct file_operations mf_src_proc_fops = {
.owner = THIS_MODULE,
.open = mf_src_proc_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
.write = mf_src_proc_write,
};
static ssize_t mf_cmdline_proc_write(struct file *file, const char __user *buffer,
size_t count, loff_t *pos)
{
void *data = PDE(file->f_path.dentry->d_inode)->data;
struct vsp_cmd_data vsp_cmd;
dma_addr_t dma_addr;
char *page;
@ -1172,6 +1188,15 @@ static int proc_mf_change_cmdline(struct file *file, const char __user *buffer,
return ret;
}
static const struct file_operations mf_cmdline_proc_fops = {
.owner = THIS_MODULE,
.open = mf_cmdline_proc_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
.write = mf_cmdline_proc_write,
};
static ssize_t proc_mf_change_vmlinux(struct file *file,
const char __user *buf,
size_t count, loff_t *ppos)
@ -1246,12 +1271,10 @@ static int __init mf_proc_init(void)
if (!mf)
return 1;
ent = create_proc_entry("cmdline", S_IFREG|S_IRUSR|S_IWUSR, mf);
ent = proc_create_data("cmdline", S_IRUSR|S_IWUSR, mf,
&mf_cmdline_proc_fops, (void *)(long)i);
if (!ent)
return 1;
ent->data = (void *)(long)i;
ent->read_proc = proc_mf_dump_cmdline;
ent->write_proc = proc_mf_change_cmdline;
if (i == 3) /* no vmlinux entry for 'D' */
continue;
@ -1263,19 +1286,15 @@ static int __init mf_proc_init(void)
return 1;
}
ent = create_proc_entry("side", S_IFREG|S_IRUSR|S_IWUSR, mf_proc_root);
ent = proc_create("side", S_IFREG|S_IRUSR|S_IWUSR, mf_proc_root,
&mf_side_proc_fops);
if (!ent)
return 1;
ent->data = (void *)0;
ent->read_proc = proc_mf_dump_side;
ent->write_proc = proc_mf_change_side;
ent = create_proc_entry("src", S_IFREG|S_IRUSR|S_IWUSR, mf_proc_root);
ent = proc_create("src", S_IFREG|S_IRUSR|S_IWUSR, mf_proc_root,
&mf_src_proc_fops);
if (!ent)
return 1;
ent->data = (void *)0;
ent->read_proc = proc_mf_dump_src;
ent->write_proc = proc_mf_change_src;
return 0;
}

View File

@ -116,7 +116,7 @@ static int proc_viopath_show(struct seq_file *m, void *v)
u16 vlanMap;
dma_addr_t handle;
HvLpEvent_Rc hvrc;
DECLARE_COMPLETION(done);
DECLARE_COMPLETION_ONSTACK(done);
struct device_node *node;
const char *sysid;

View File

@ -2,6 +2,8 @@ config PPC_PSERIES
depends on PPC64 && PPC_BOOK3S
bool "IBM pSeries & new (POWER5-based) iSeries"
select MPIC
select PCI_MSI
select XICS
select PPC_I8259
select PPC_RTAS
select PPC_RTAS_DAEMON

View File

@ -38,19 +38,28 @@
#include <asm/mmu.h>
#include <asm/pgalloc.h>
#include <asm/uaccess.h>
#include <linux/memory.h>
#include "plpar_wrappers.h"
#define CMM_DRIVER_VERSION "1.0.0"
#define CMM_DEFAULT_DELAY 1
#define CMM_HOTPLUG_DELAY 5
#define CMM_DEBUG 0
#define CMM_DISABLE 0
#define CMM_OOM_KB 1024
#define CMM_MIN_MEM_MB 256
#define KB2PAGES(_p) ((_p)>>(PAGE_SHIFT-10))
#define PAGES2KB(_p) ((_p)<<(PAGE_SHIFT-10))
/*
* The priority level tries to ensure that this notifier is called as
* late as possible to reduce thrashing in the shared memory pool.
*/
#define CMM_MEM_HOTPLUG_PRI 1
#define CMM_MEM_ISOLATE_PRI 15
static unsigned int delay = CMM_DEFAULT_DELAY;
static unsigned int hotplug_delay = CMM_HOTPLUG_DELAY;
static unsigned int oom_kb = CMM_OOM_KB;
static unsigned int cmm_debug = CMM_DEBUG;
static unsigned int cmm_disabled = CMM_DISABLE;
@ -65,6 +74,10 @@ MODULE_VERSION(CMM_DRIVER_VERSION);
module_param_named(delay, delay, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(delay, "Delay (in seconds) between polls to query hypervisor paging requests. "
"[Default=" __stringify(CMM_DEFAULT_DELAY) "]");
module_param_named(hotplug_delay, hotplug_delay, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(hotplug_delay, "Delay (in seconds) after memory hotplug remove "
"before loaning resumes. "
"[Default=" __stringify(CMM_HOTPLUG_DELAY) "]");
module_param_named(oom_kb, oom_kb, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(oom_kb, "Amount of memory in kb to free on OOM. "
"[Default=" __stringify(CMM_OOM_KB) "]");
@ -92,6 +105,9 @@ static unsigned long oom_freed_pages;
static struct cmm_page_array *cmm_page_list;
static DEFINE_SPINLOCK(cmm_lock);
static DEFINE_MUTEX(hotplug_mutex);
static int hotplug_occurred; /* protected by the hotplug mutex */
static struct task_struct *cmm_thread_ptr;
/**
@ -110,6 +126,17 @@ static long cmm_alloc_pages(long nr)
cmm_dbg("Begin request for %ld pages\n", nr);
while (nr) {
/* Exit if a hotplug operation is in progress or occurred */
if (mutex_trylock(&hotplug_mutex)) {
if (hotplug_occurred) {
mutex_unlock(&hotplug_mutex);
break;
}
mutex_unlock(&hotplug_mutex);
} else {
break;
}
addr = __get_free_page(GFP_NOIO | __GFP_NOWARN |
__GFP_NORETRY | __GFP_NOMEMALLOC);
if (!addr)
@ -119,8 +146,9 @@ static long cmm_alloc_pages(long nr)
if (!pa || pa->index >= CMM_NR_PAGES) {
/* Need a new page for the page list. */
spin_unlock(&cmm_lock);
npa = (struct cmm_page_array *)__get_free_page(GFP_NOIO | __GFP_NOWARN |
__GFP_NORETRY | __GFP_NOMEMALLOC);
npa = (struct cmm_page_array *)__get_free_page(
GFP_NOIO | __GFP_NOWARN |
__GFP_NORETRY | __GFP_NOMEMALLOC);
if (!npa) {
pr_info("%s: Can not allocate new page list\n", __func__);
free_page(addr);
@ -282,9 +310,28 @@ static int cmm_thread(void *dummy)
while (1) {
timeleft = msleep_interruptible(delay * 1000);
if (kthread_should_stop() || timeleft) {
loaned_pages_target = loaned_pages;
if (kthread_should_stop() || timeleft)
break;
if (mutex_trylock(&hotplug_mutex)) {
if (hotplug_occurred) {
hotplug_occurred = 0;
mutex_unlock(&hotplug_mutex);
cmm_dbg("Hotplug operation has occurred, "
"loaning activity suspended "
"for %d seconds.\n",
hotplug_delay);
timeleft = msleep_interruptible(hotplug_delay *
1000);
if (kthread_should_stop() || timeleft)
break;
continue;
}
mutex_unlock(&hotplug_mutex);
} else {
cmm_dbg("Hotplug operation in progress, activity "
"suspended\n");
continue;
}
cmm_get_mpp();
@ -413,6 +460,193 @@ static struct notifier_block cmm_reboot_nb = {
.notifier_call = cmm_reboot_notifier,
};
/**
* cmm_count_pages - Count the number of pages loaned in a particular range.
*
* @arg: memory_isolate_notify structure with address range and count
*
* Return value:
* 0 on success
**/
static unsigned long cmm_count_pages(void *arg)
{
struct memory_isolate_notify *marg = arg;
struct cmm_page_array *pa;
unsigned long start = (unsigned long)pfn_to_kaddr(marg->start_pfn);
unsigned long end = start + (marg->nr_pages << PAGE_SHIFT);
unsigned long idx;
spin_lock(&cmm_lock);
pa = cmm_page_list;
while (pa) {
if ((unsigned long)pa >= start && (unsigned long)pa < end)
marg->pages_found++;
for (idx = 0; idx < pa->index; idx++)
if (pa->page[idx] >= start && pa->page[idx] < end)
marg->pages_found++;
pa = pa->next;
}
spin_unlock(&cmm_lock);
return 0;
}
/**
* cmm_memory_isolate_cb - Handle memory isolation notifier calls
* @self: notifier block struct
* @action: action to take
* @arg: struct memory_isolate_notify data for handler
*
* Return value:
* NOTIFY_OK or notifier error based on subfunction return value
**/
static int cmm_memory_isolate_cb(struct notifier_block *self,
unsigned long action, void *arg)
{
int ret = 0;
if (action == MEM_ISOLATE_COUNT)
ret = cmm_count_pages(arg);
if (ret)
ret = notifier_from_errno(ret);
else
ret = NOTIFY_OK;
return ret;
}
static struct notifier_block cmm_mem_isolate_nb = {
.notifier_call = cmm_memory_isolate_cb,
.priority = CMM_MEM_ISOLATE_PRI
};
/**
* cmm_mem_going_offline - Unloan pages where memory is to be removed
* @arg: memory_notify structure with page range to be offlined
*
* Return value:
* 0 on success
**/
static int cmm_mem_going_offline(void *arg)
{
struct memory_notify *marg = arg;
unsigned long start_page = (unsigned long)pfn_to_kaddr(marg->start_pfn);
unsigned long end_page = start_page + (marg->nr_pages << PAGE_SHIFT);
struct cmm_page_array *pa_curr, *pa_last, *npa;
unsigned long idx;
unsigned long freed = 0;
cmm_dbg("Memory going offline, searching 0x%lx (%ld pages).\n",
start_page, marg->nr_pages);
spin_lock(&cmm_lock);
/* Search the page list for pages in the range to be offlined */
pa_last = pa_curr = cmm_page_list;
while (pa_curr) {
for (idx = (pa_curr->index - 1); (idx + 1) > 0; idx--) {
if ((pa_curr->page[idx] < start_page) ||
(pa_curr->page[idx] >= end_page))
continue;
plpar_page_set_active(__pa(pa_curr->page[idx]));
free_page(pa_curr->page[idx]);
freed++;
loaned_pages--;
totalram_pages++;
pa_curr->page[idx] = pa_last->page[--pa_last->index];
if (pa_last->index == 0) {
if (pa_curr == pa_last)
pa_curr = pa_last->next;
pa_last = pa_last->next;
free_page((unsigned long)cmm_page_list);
cmm_page_list = pa_last;
continue;
}
}
pa_curr = pa_curr->next;
}
/* Search for page list structures in the range to be offlined */
pa_last = NULL;
pa_curr = cmm_page_list;
while (pa_curr) {
if (((unsigned long)pa_curr >= start_page) &&
((unsigned long)pa_curr < end_page)) {
npa = (struct cmm_page_array *)__get_free_page(
GFP_NOIO | __GFP_NOWARN |
__GFP_NORETRY | __GFP_NOMEMALLOC);
if (!npa) {
spin_unlock(&cmm_lock);
cmm_dbg("Failed to allocate memory for list "
"management. Memory hotplug "
"failed.\n");
return -ENOMEM;
}
memcpy(npa, pa_curr, PAGE_SIZE);
if (pa_curr == cmm_page_list)
cmm_page_list = npa;
if (pa_last)
pa_last->next = npa;
free_page((unsigned long) pa_curr);
freed++;
pa_curr = npa;
}
pa_last = pa_curr;
pa_curr = pa_curr->next;
}
spin_unlock(&cmm_lock);
cmm_dbg("Released %ld pages in the search range.\n", freed);
return 0;
}
/**
* cmm_memory_cb - Handle memory hotplug notifier calls
* @self: notifier block struct
* @action: action to take
* @arg: struct memory_notify data for handler
*
* Return value:
* NOTIFY_OK or notifier error based on subfunction return value
*
**/
static int cmm_memory_cb(struct notifier_block *self,
unsigned long action, void *arg)
{
int ret = 0;
switch (action) {
case MEM_GOING_OFFLINE:
mutex_lock(&hotplug_mutex);
hotplug_occurred = 1;
ret = cmm_mem_going_offline(arg);
break;
case MEM_OFFLINE:
case MEM_CANCEL_OFFLINE:
mutex_unlock(&hotplug_mutex);
cmm_dbg("Memory offline operation complete.\n");
break;
case MEM_GOING_ONLINE:
case MEM_ONLINE:
case MEM_CANCEL_ONLINE:
break;
}
if (ret)
ret = notifier_from_errno(ret);
else
ret = NOTIFY_OK;
return ret;
}
static struct notifier_block cmm_mem_nb = {
.notifier_call = cmm_memory_cb,
.priority = CMM_MEM_HOTPLUG_PRI
};
/**
* cmm_init - Module initialization
*
@ -435,18 +669,24 @@ static int cmm_init(void)
if ((rc = cmm_sysfs_register(&cmm_sysdev)))
goto out_reboot_notifier;
if (register_memory_notifier(&cmm_mem_nb) ||
register_memory_isolate_notifier(&cmm_mem_isolate_nb))
goto out_unregister_notifier;
if (cmm_disabled)
return rc;
cmm_thread_ptr = kthread_run(cmm_thread, NULL, "cmmthread");
if (IS_ERR(cmm_thread_ptr)) {
rc = PTR_ERR(cmm_thread_ptr);
goto out_unregister_sysfs;
goto out_unregister_notifier;
}
return rc;
out_unregister_sysfs:
out_unregister_notifier:
unregister_memory_notifier(&cmm_mem_nb);
unregister_memory_isolate_notifier(&cmm_mem_isolate_nb);
cmm_unregister_sysfs(&cmm_sysdev);
out_reboot_notifier:
unregister_reboot_notifier(&cmm_reboot_nb);
@ -467,6 +707,8 @@ static void cmm_exit(void)
kthread_stop(cmm_thread_ptr);
unregister_oom_notifier(&cmm_oom_nb);
unregister_reboot_notifier(&cmm_reboot_nb);
unregister_memory_notifier(&cmm_mem_nb);
unregister_memory_isolate_notifier(&cmm_mem_isolate_nb);
cmm_free_pages(loaned_pages);
cmm_unregister_sysfs(&cmm_sysdev);
}

View File

@ -346,12 +346,14 @@ int dlpar_release_drc(u32 drc_index)
static DEFINE_MUTEX(pseries_cpu_hotplug_mutex);
void cpu_hotplug_driver_lock()
void cpu_hotplug_driver_lock(void)
__acquires(pseries_cpu_hotplug_mutex)
{
mutex_lock(&pseries_cpu_hotplug_mutex);
}
void cpu_hotplug_driver_unlock()
void cpu_hotplug_driver_unlock(void)
__releases(pseries_cpu_hotplug_mutex)
{
mutex_unlock(&pseries_cpu_hotplug_mutex);
}

View File

@ -144,8 +144,8 @@ static void __devinit smp_pSeries_kick_cpu(int nr)
hcpuid = get_hard_smp_processor_id(nr);
rc = plpar_hcall_norets(H_PROD, hcpuid);
if (rc != H_SUCCESS)
panic("Error: Prod to wake up processor %d Ret= %ld\n",
nr, rc);
printk(KERN_ERR "Error: Prod to wake up processor %d\
Ret= %ld\n", nr, rc);
}
}

View File

@ -143,13 +143,23 @@ static int cpm2_set_irq_type(unsigned int virq, unsigned int flow_type)
struct irq_desc *desc = irq_to_desc(virq);
unsigned int vold, vnew, edibit;
if (flow_type == IRQ_TYPE_NONE)
flow_type = IRQ_TYPE_LEVEL_LOW;
/* Port C interrupts are either IRQ_TYPE_EDGE_FALLING or
* IRQ_TYPE_EDGE_BOTH (default). All others are IRQ_TYPE_EDGE_FALLING
* or IRQ_TYPE_LEVEL_LOW (default)
*/
if (src >= CPM2_IRQ_PORTC15 && src <= CPM2_IRQ_PORTC0) {
if (flow_type == IRQ_TYPE_NONE)
flow_type = IRQ_TYPE_EDGE_BOTH;
if (flow_type & IRQ_TYPE_EDGE_RISING) {
printk(KERN_ERR "CPM2 PIC: sense type 0x%x not supported\n",
flow_type);
return -EINVAL;
if (flow_type != IRQ_TYPE_EDGE_BOTH &&
flow_type != IRQ_TYPE_EDGE_FALLING)
goto err_sense;
} else {
if (flow_type == IRQ_TYPE_NONE)
flow_type = IRQ_TYPE_LEVEL_LOW;
if (flow_type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_LEVEL_HIGH))
goto err_sense;
}
desc->status &= ~(IRQ_TYPE_SENSE_MASK | IRQ_LEVEL);
@ -181,6 +191,10 @@ static int cpm2_set_irq_type(unsigned int virq, unsigned int flow_type)
if (vold != vnew)
out_be32(&cpm2_intctl->ic_siexr, vnew);
return 0;
err_sense:
pr_err("CPM2 PIC: sense type 0x%x not supported\n", flow_type);
return -EINVAL;
}
static struct irq_chip cpm2_pic = {

View File

@ -464,8 +464,7 @@ static void __iomem *mpc83xx_pcie_remap_cfg(struct pci_bus *bus,
{
struct pci_controller *hose = pci_bus_to_host(bus);
struct mpc83xx_pcie_priv *pcie = hose->dn->data;
u8 bus_no = bus->number - hose->first_busno;
u32 dev_base = bus_no << 24 | devfn << 16;
u32 dev_base = bus->number << 24 | devfn << 16;
int ret;
ret = mpc83xx_pcie_exclude_device(bus, devfn);
@ -515,12 +514,17 @@ static int mpc83xx_pcie_read_config(struct pci_bus *bus, unsigned int devfn,
static int mpc83xx_pcie_write_config(struct pci_bus *bus, unsigned int devfn,
int offset, int len, u32 val)
{
struct pci_controller *hose = pci_bus_to_host(bus);
void __iomem *cfg_addr;
cfg_addr = mpc83xx_pcie_remap_cfg(bus, devfn, offset);
if (!cfg_addr)
return PCIBIOS_DEVICE_NOT_FOUND;
/* PPC_INDIRECT_TYPE_SURPRESS_PRIMARY_BUS */
if (offset == PCI_PRIMARY_BUS && bus->number == hose->first_busno)
val &= 0xffffff00;
switch (len) {
case 1:
out_8(cfg_addr, val);

View File

@ -54,6 +54,22 @@ static void mpc8xxx_gpio_save_regs(struct of_mm_gpio_chip *mm)
mpc8xxx_gc->data = in_be32(mm->regs + GPIO_DAT);
}
/* Workaround for the GPIO 1 errata on MPC8572/MPC8536. The status of GPIOs
* defined as output cannot be determined by reading the GPDAT register,
* so we use a shadow data register instead. The status of input pins
* is determined by reading the GPDAT register.
*/
static int mpc8572_gpio_get(struct gpio_chip *gc, unsigned int gpio)
{
u32 val;
struct of_mm_gpio_chip *mm = to_of_mm_gpio_chip(gc);
struct mpc8xxx_gpio_chip *mpc8xxx_gc = to_mpc8xxx_gpio_chip(mm);
val = in_be32(mm->regs + GPIO_DAT) & ~in_be32(mm->regs + GPIO_DIR);
return (val | mpc8xxx_gc->data) & mpc8xxx_gpio2mask(gpio);
}
static int mpc8xxx_gpio_get(struct gpio_chip *gc, unsigned int gpio)
{
struct of_mm_gpio_chip *mm = to_of_mm_gpio_chip(gc);
@ -136,7 +152,10 @@ static void __init mpc8xxx_add_controller(struct device_node *np)
gc->ngpio = MPC8XXX_GPIO_PINS;
gc->direction_input = mpc8xxx_gpio_dir_in;
gc->direction_output = mpc8xxx_gpio_dir_out;
gc->get = mpc8xxx_gpio_get;
if (of_device_is_compatible(np, "fsl,mpc8572-gpio"))
gc->get = mpc8572_gpio_get;
else
gc->get = mpc8xxx_gpio_get;
gc->set = mpc8xxx_gpio_set;
ret = of_mm_gpiochip_add(np, mm_gc);

View File

@ -567,13 +567,11 @@ static void __init mpic_scan_ht_pics(struct mpic *mpic)
#endif /* CONFIG_MPIC_U3_HT_IRQS */
#ifdef CONFIG_SMP
static int irq_choose_cpu(unsigned int virt_irq)
static int irq_choose_cpu(const cpumask_t *mask)
{
cpumask_t mask;
int cpuid;
cpumask_copy(&mask, irq_to_desc(virt_irq)->affinity);
if (cpus_equal(mask, CPU_MASK_ALL)) {
if (cpumask_equal(mask, cpu_all_mask)) {
static int irq_rover;
static DEFINE_SPINLOCK(irq_rover_lock);
unsigned long flags;
@ -594,20 +592,15 @@ static int irq_choose_cpu(unsigned int virt_irq)
spin_unlock_irqrestore(&irq_rover_lock, flags);
} else {
cpumask_t tmp;
cpus_and(tmp, cpu_online_map, mask);
if (cpus_empty(tmp))
cpuid = cpumask_first_and(mask, cpu_online_mask);
if (cpuid >= nr_cpu_ids)
goto do_round_robin;
cpuid = first_cpu(tmp);
}
return get_hard_smp_processor_id(cpuid);
}
#else
static int irq_choose_cpu(unsigned int virt_irq)
static int irq_choose_cpu(const cpumask_t *mask)
{
return hard_smp_processor_id();
}
@ -816,7 +809,7 @@ int mpic_set_affinity(unsigned int irq, const struct cpumask *cpumask)
unsigned int src = mpic_irq_to_hw(irq);
if (mpic->flags & MPIC_SINGLE_DEST_CPU) {
int cpuid = irq_choose_cpu(irq);
int cpuid = irq_choose_cpu(cpumask);
mpic_irq_write(src, MPIC_INFO(IRQ_DESTINATION), 1 << cpuid);
} else {

View File

@ -39,7 +39,12 @@ static int mpic_msi_reserve_u3_hwirqs(struct mpic *mpic)
pr_debug("mpic: found U3, guessing msi allocator setup\n");
/* Reserve source numbers we know are reserved in the HW */
/* Reserve source numbers we know are reserved in the HW.
*
* This is a bit of a mix of U3 and U4 reserves but that's going
* to work fine, we have plenty enough numbers left, so let's just
* mark anything we don't like reserved.
*/
for (i = 0; i < 8; i++)
msi_bitmap_reserve_hwirq(&mpic->msi_bitmap, i);
@ -49,6 +54,10 @@ static int mpic_msi_reserve_u3_hwirqs(struct mpic *mpic)
for (i = 100; i < 105; i++)
msi_bitmap_reserve_hwirq(&mpic->msi_bitmap, i);
for (i = 124; i < mpic->irq_count; i++)
msi_bitmap_reserve_hwirq(&mpic->msi_bitmap, i);
np = NULL;
while ((np = of_find_all_nodes(np))) {
pr_debug("mpic: mapping hwirqs for %s\n", np->full_name);

View File

@ -64,12 +64,12 @@ static u64 read_ht_magic_addr(struct pci_dev *pdev, unsigned int pos)
return addr;
}
static u64 find_ht_magic_addr(struct pci_dev *pdev)
static u64 find_ht_magic_addr(struct pci_dev *pdev, unsigned int hwirq)
{
struct pci_bus *bus;
unsigned int pos;
for (bus = pdev->bus; bus; bus = bus->parent) {
for (bus = pdev->bus; bus && bus->self; bus = bus->parent) {
pos = pci_find_ht_capability(bus->self, HT_CAPTYPE_MSI_MAPPING);
if (pos)
return read_ht_magic_addr(bus->self, pos);
@ -78,13 +78,41 @@ static u64 find_ht_magic_addr(struct pci_dev *pdev)
return 0;
}
static u64 find_u4_magic_addr(struct pci_dev *pdev, unsigned int hwirq)
{
struct pci_controller *hose = pci_bus_to_host(pdev->bus);
/* U4 PCIe MSIs need to write to the special register in
* the bridge that generates interrupts. There should theoretically
* be a register at 0xf8005000 where you just write
* the MSI number and that triggers the right interrupt, but
* unfortunately, this is busted in HW, the bridge endian swaps
* the value and hits the wrong nibble in the register.
*
* So instead we use another register set which is used normally
* for converting HT interrupts to MPIC interrupts, which decodes
* the interrupt number as part of the low address bits
*
* This will not work if we ever use more than one legacy MSI in
* a block but we never do. For one MSI or multiple MSI-X where
* each interrupt address can be specified separately, it works
* just fine.
*/
if (of_device_is_compatible(hose->dn, "u4-pcie") ||
of_device_is_compatible(hose->dn, "U4-pcie"))
return 0xf8004000 | (hwirq << 4);
return 0;
}
static int u3msi_msi_check_device(struct pci_dev *pdev, int nvec, int type)
{
if (type == PCI_CAP_ID_MSIX)
pr_debug("u3msi: MSI-X untested, trying anyway.\n");
/* If we can't find a magic address then MSI ain't gonna work */
if (find_ht_magic_addr(pdev) == 0) {
if (find_ht_magic_addr(pdev, 0) == 0 &&
find_u4_magic_addr(pdev, 0) == 0) {
pr_debug("u3msi: no magic address found for %s\n",
pci_name(pdev));
return -ENXIO;
@ -118,10 +146,6 @@ static int u3msi_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
u64 addr;
int hwirq;
addr = find_ht_magic_addr(pdev);
msg.address_lo = addr & 0xFFFFFFFF;
msg.address_hi = addr >> 32;
list_for_each_entry(entry, &pdev->msi_list, list) {
hwirq = msi_bitmap_alloc_hwirqs(&msi_mpic->msi_bitmap, 1);
if (hwirq < 0) {
@ -129,6 +153,12 @@ static int u3msi_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
return hwirq;
}
addr = find_ht_magic_addr(pdev, hwirq);
if (addr == 0)
addr = find_u4_magic_addr(pdev, hwirq);
msg.address_lo = addr & 0xFFFFFFFF;
msg.address_hi = addr >> 32;
virq = irq_create_mapping(msi_mpic->irqhost, hwirq);
if (virq == NO_IRQ) {
pr_debug("u3msi: failed mapping hwirq 0x%x\n", hwirq);
@ -143,6 +173,8 @@ static int u3msi_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
pr_debug("u3msi: allocated virq 0x%x (hw 0x%x) addr 0x%lx\n",
virq, hwirq, (unsigned long)addr);
printk("u3msi: allocated virq 0x%x (hw 0x%x) addr 0x%lx\n",
virq, hwirq, (unsigned long)addr);
msg.data = hwirq;
write_msi_msg(virq, &msg);

View File

@ -128,8 +128,6 @@ static struct platform_device nor_flash_device = {
/* SH Eth */
#define SH_ETH_ADDR (0xA4600000)
#define SH_ETH_MAHR (SH_ETH_ADDR + 0x1C0)
#define SH_ETH_MALR (SH_ETH_ADDR + 0x1C8)
static struct resource sh_eth_resources[] = {
[0] = {
.start = SH_ETH_ADDR,
@ -509,6 +507,7 @@ static struct platform_device sdhi1_device = {
#else
/* MMC SPI */
static int mmc_spi_get_ro(struct device *dev)
{
return gpio_get_value(GPIO_PTY6);
@ -542,6 +541,7 @@ static struct spi_board_info spi_bus[] = {
},
};
/* MSIOF0 */
static struct sh_msiof_spi_info msiof0_data = {
.num_chipselect = 1,
};

View File

@ -133,12 +133,8 @@ void __update_cache(struct vm_area_struct *vma,
page = pfn_to_page(pfn);
if (pfn_valid(pfn)) {
int dirty = test_and_clear_bit(PG_dcache_dirty, &page->flags);
if (dirty) {
unsigned long addr = (unsigned long)page_address(page);
if (pages_do_alias(addr, address & PAGE_MASK))
__flush_purge_region((void *)addr, PAGE_SIZE);
}
if (dirty)
__flush_purge_region(page_address(page), PAGE_SIZE);
}
}

View File

@ -394,7 +394,7 @@ static enum ucode_state microcode_update_cpu(int cpu)
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
enum ucode_state ustate;
if (uci->valid && uci->mc)
if (uci->valid)
ustate = microcode_resume_cpu(cpu);
else
ustate = microcode_init_cpu(cpu);

View File

@ -1557,6 +1557,25 @@ static unsigned short atapi_io_port[] = {
P_ATAPI_DMARQ,
P_ATAPI_INTRQ,
P_ATAPI_IORDY,
P_ATAPI_D0A,
P_ATAPI_D1A,
P_ATAPI_D2A,
P_ATAPI_D3A,
P_ATAPI_D4A,
P_ATAPI_D5A,
P_ATAPI_D6A,
P_ATAPI_D7A,
P_ATAPI_D8A,
P_ATAPI_D9A,
P_ATAPI_D10A,
P_ATAPI_D11A,
P_ATAPI_D12A,
P_ATAPI_D13A,
P_ATAPI_D14A,
P_ATAPI_D15A,
P_ATAPI_A0A,
P_ATAPI_A1A,
P_ATAPI_A2A,
0
};

View File

@ -31,7 +31,7 @@
#include <linux/libata.h>
#define DRV_NAME "pata_cmd64x"
#define DRV_VERSION "0.3.1"
#define DRV_VERSION "0.2.5"
/*
* CMD64x specific registers definition.
@ -219,7 +219,7 @@ static void cmd64x_set_dmamode(struct ata_port *ap, struct ata_device *adev)
regU |= udma_data[adev->dma_mode - XFER_UDMA_0] << shift;
/* Merge the control bits */
regU |= 1 << adev->devno; /* UDMA on */
if (adev->dma_mode > 2) /* 15nS timing */
if (adev->dma_mode > XFER_UDMA_2) /* 15nS timing */
regU |= 4 << adev->devno;
} else {
regU &= ~ (1 << adev->devno); /* UDMA off */
@ -254,109 +254,17 @@ static void cmd648_bmdma_stop(struct ata_queued_cmd *qc)
}
/**
* cmd64x_bmdma_stop - DMA stop callback
* cmd646r1_dma_stop - DMA stop callback
* @qc: Command in progress
*
* Track the completion of live DMA commands and clear the
* host->private_data DMA tracking flag as we do.
* Stub for now while investigating the r1 quirk in the old driver.
*/
static void cmd64x_bmdma_stop(struct ata_queued_cmd *qc)
static void cmd646r1_bmdma_stop(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
ata_bmdma_stop(qc);
WARN_ON(ap->host->private_data != ap);
ap->host->private_data = NULL;
}
/**
* cmd64x_qc_defer - Defer logic for chip limits
* @qc: queued command
*
* Decide whether we can issue the command. Called under the host lock.
*/
static int cmd64x_qc_defer(struct ata_queued_cmd *qc)
{
struct ata_host *host = qc->ap->host;
struct ata_port *alt = host->ports[1 ^ qc->ap->port_no];
int rc;
int dma = 0;
/* Apply the ATA rules first */
rc = ata_std_qc_defer(qc);
if (rc)
return rc;
if (qc->tf.protocol == ATAPI_PROT_DMA ||
qc->tf.protocol == ATA_PROT_DMA)
dma = 1;
/* If the other port is not live then issue the command */
if (alt == NULL || !alt->qc_active) {
if (dma)
host->private_data = qc->ap;
return 0;
}
/* If there is a live DMA command then wait */
if (host->private_data != NULL)
return ATA_DEFER_PORT;
if (dma)
/* Cannot overlap our DMA command */
return ATA_DEFER_PORT;
return 0;
}
/**
* cmd64x_interrupt - ATA host interrupt handler
* @irq: irq line (unused)
* @dev_instance: pointer to our ata_host information structure
*
* Our interrupt handler for PCI IDE devices. Calls
* ata_sff_host_intr() for each port that is flagging an IRQ. We cannot
* use the defaults as we need to avoid touching status/altstatus during
* a DMA.
*
* LOCKING:
* Obtains host lock during operation.
*
* RETURNS:
* IRQ_NONE or IRQ_HANDLED.
*/
irqreturn_t cmd64x_interrupt(int irq, void *dev_instance)
{
struct ata_host *host = dev_instance;
struct pci_dev *pdev = to_pci_dev(host->dev);
unsigned int i;
unsigned int handled = 0;
unsigned long flags;
static const u8 irq_reg[2] = { CFR, ARTTIM23 };
static const u8 irq_mask[2] = { 1 << 2, 1 << 4 };
/* TODO: make _irqsave conditional on x86 PCI IDE legacy mode */
spin_lock_irqsave(&host->lock, flags);
for (i = 0; i < host->n_ports; i++) {
struct ata_port *ap;
u8 reg;
pci_read_config_byte(pdev, irq_reg[i], &reg);
ap = host->ports[i];
if (ap && (reg & irq_mask[i]) &&
!(ap->flags & ATA_FLAG_DISABLED)) {
struct ata_queued_cmd *qc;
qc = ata_qc_from_tag(ap, ap->link.active_tag);
if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING)) &&
(qc->flags & ATA_QCFLAG_ACTIVE))
handled |= ata_sff_host_intr(ap, qc);
}
}
spin_unlock_irqrestore(&host->lock, flags);
return IRQ_RETVAL(handled);
}
static struct scsi_host_template cmd64x_sht = {
ATA_BMDMA_SHT(DRV_NAME),
};
@ -365,8 +273,6 @@ static const struct ata_port_operations cmd64x_base_ops = {
.inherits = &ata_bmdma_port_ops,
.set_piomode = cmd64x_set_piomode,
.set_dmamode = cmd64x_set_dmamode,
.bmdma_stop = cmd64x_bmdma_stop,
.qc_defer = cmd64x_qc_defer,
};
static struct ata_port_operations cmd64x_port_ops = {
@ -376,6 +282,7 @@ static struct ata_port_operations cmd64x_port_ops = {
static struct ata_port_operations cmd646r1_port_ops = {
.inherits = &cmd64x_base_ops,
.bmdma_stop = cmd646r1_bmdma_stop,
.cable_detect = ata_cable_40wire,
};
@ -383,7 +290,6 @@ static struct ata_port_operations cmd648_port_ops = {
.inherits = &cmd64x_base_ops,
.bmdma_stop = cmd648_bmdma_stop,
.cable_detect = cmd648_cable_detect,
.qc_defer = ata_std_qc_defer
};
static int cmd64x_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
@ -432,7 +338,6 @@ static int cmd64x_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
const struct ata_port_info *ppi[] = { &cmd_info[id->driver_data], NULL };
u8 mrdmode;
int rc;
struct ata_host *host;
rc = pcim_enable_device(pdev);
if (rc)
@ -450,25 +355,20 @@ static int cmd64x_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
ppi[0] = &cmd_info[3];
}
pci_write_config_byte(pdev, PCI_LATENCY_TIMER, 64);
pci_read_config_byte(pdev, MRDMODE, &mrdmode);
mrdmode &= ~ 0x30; /* IRQ set up */
mrdmode |= 0x02; /* Memory read line enable */
pci_write_config_byte(pdev, MRDMODE, mrdmode);
/* Force PIO 0 here.. */
/* PPC specific fixup copied from old driver */
#ifdef CONFIG_PPC
pci_write_config_byte(pdev, UDIDETCR0, 0xF0);
#endif
rc = ata_pci_sff_prepare_host(pdev, ppi, &host);
if (rc)
return rc;
/* We use this pointer to track the AP which has DMA running */
host->private_data = NULL;
pci_set_master(pdev);
return ata_pci_sff_activate_host(host, cmd64x_interrupt, &cmd64x_sht);
return ata_pci_sff_init_one(pdev, ppi, &cmd64x_sht, NULL);
}
#ifdef CONFIG_PM

View File

@ -703,9 +703,9 @@ int bus_add_driver(struct device_driver *drv)
return 0;
out_unregister:
kobject_put(&priv->kobj);
kfree(drv->p);
drv->p = NULL;
kobject_put(&priv->kobj);
out_put_bus:
bus_put(bus);
return error;

View File

@ -446,7 +446,8 @@ struct kset *devices_kset;
* @dev: device.
* @attr: device attribute descriptor.
*/
int device_create_file(struct device *dev, struct device_attribute *attr)
int device_create_file(struct device *dev,
const struct device_attribute *attr)
{
int error = 0;
if (dev)
@ -459,7 +460,8 @@ int device_create_file(struct device *dev, struct device_attribute *attr)
* @dev: device.
* @attr: device attribute descriptor.
*/
void device_remove_file(struct device *dev, struct device_attribute *attr)
void device_remove_file(struct device *dev,
const struct device_attribute *attr)
{
if (dev)
sysfs_remove_file(&dev->kobj, &attr->attr);
@ -470,7 +472,8 @@ void device_remove_file(struct device *dev, struct device_attribute *attr)
* @dev: device.
* @attr: device binary attribute descriptor.
*/
int device_create_bin_file(struct device *dev, struct bin_attribute *attr)
int device_create_bin_file(struct device *dev,
const struct bin_attribute *attr)
{
int error = -EINVAL;
if (dev)
@ -484,7 +487,8 @@ EXPORT_SYMBOL_GPL(device_create_bin_file);
* @dev: device.
* @attr: device binary attribute descriptor.
*/
void device_remove_bin_file(struct device *dev, struct bin_attribute *attr)
void device_remove_bin_file(struct device *dev,
const struct bin_attribute *attr)
{
if (dev)
sysfs_remove_bin_file(&dev->kobj, attr);
@ -905,8 +909,10 @@ int device_add(struct device *dev)
dev->init_name = NULL;
}
if (!dev_name(dev))
if (!dev_name(dev)) {
error = -EINVAL;
goto name_error;
}
pr_debug("device: '%s': %s\n", dev_name(dev), __func__);

View File

@ -32,7 +32,7 @@ static int dev_mount = 1;
static int dev_mount;
#endif
static rwlock_t dirlock;
static DEFINE_MUTEX(dirlock);
static int __init mount_param(char *str)
{
@ -93,7 +93,7 @@ static int create_path(const char *nodepath)
{
int err;
read_lock(&dirlock);
mutex_lock(&dirlock);
err = dev_mkdir(nodepath, 0755);
if (err == -ENOENT) {
char *path;
@ -101,8 +101,10 @@ static int create_path(const char *nodepath)
/* parent directories do not exist, create them */
path = kstrdup(nodepath, GFP_KERNEL);
if (!path)
return -ENOMEM;
if (!path) {
err = -ENOMEM;
goto out;
}
s = path;
for (;;) {
s = strchr(s, '/');
@ -117,7 +119,8 @@ static int create_path(const char *nodepath)
}
kfree(path);
}
read_unlock(&dirlock);
out:
mutex_unlock(&dirlock);
return err;
}
@ -229,7 +232,7 @@ static int delete_path(const char *nodepath)
if (!path)
return -ENOMEM;
write_lock(&dirlock);
mutex_lock(&dirlock);
for (;;) {
char *base;
@ -241,7 +244,7 @@ static int delete_path(const char *nodepath)
if (err)
break;
}
write_unlock(&dirlock);
mutex_unlock(&dirlock);
kfree(path);
return err;
@ -352,8 +355,6 @@ int __init devtmpfs_init(void)
int err;
struct vfsmount *mnt;
rwlock_init(&dirlock);
err = register_filesystem(&dev_fs_type);
if (err) {
printk(KERN_ERR "devtmpfs: unable to register devtmpfs "

View File

@ -98,7 +98,7 @@ EXPORT_SYMBOL_GPL(driver_find_device);
* @attr: driver attribute descriptor.
*/
int driver_create_file(struct device_driver *drv,
struct driver_attribute *attr)
const struct driver_attribute *attr)
{
int error;
if (drv)
@ -115,7 +115,7 @@ EXPORT_SYMBOL_GPL(driver_create_file);
* @attr: driver attribute descriptor.
*/
void driver_remove_file(struct device_driver *drv,
struct driver_attribute *attr)
const struct driver_attribute *attr)
{
if (drv)
sysfs_remove_file(&drv->p->kobj, &attr->attr);

View File

@ -63,6 +63,20 @@ void unregister_memory_notifier(struct notifier_block *nb)
}
EXPORT_SYMBOL(unregister_memory_notifier);
static ATOMIC_NOTIFIER_HEAD(memory_isolate_chain);
int register_memory_isolate_notifier(struct notifier_block *nb)
{
return atomic_notifier_chain_register(&memory_isolate_chain, nb);
}
EXPORT_SYMBOL(register_memory_isolate_notifier);
void unregister_memory_isolate_notifier(struct notifier_block *nb)
{
atomic_notifier_chain_unregister(&memory_isolate_chain, nb);
}
EXPORT_SYMBOL(unregister_memory_isolate_notifier);
/*
* register_memory - Setup a sysfs device for a memory block
*/
@ -157,6 +171,11 @@ int memory_notify(unsigned long val, void *v)
return blocking_notifier_call_chain(&memory_chain, val, v);
}
int memory_isolate_notify(unsigned long val, void *v)
{
return atomic_notifier_call_chain(&memory_isolate_chain, val, v);
}
/*
* MEMORY_HOTPLUG depends on SPARSEMEM in mm/Kconfig, so it is
* OK to have direct references to sparsemem variables in here.

View File

@ -441,6 +441,7 @@ struct platform_device *platform_device_register_data(
platform_device_put(pdev);
return ERR_PTR(retval);
}
EXPORT_SYMBOL_GPL(platform_device_register_data);
static int platform_drv_probe(struct device *_dev)
{

View File

@ -161,6 +161,32 @@ void device_pm_move_last(struct device *dev)
list_move_tail(&dev->power.entry, &dpm_list);
}
static ktime_t initcall_debug_start(struct device *dev)
{
ktime_t calltime = ktime_set(0, 0);
if (initcall_debug) {
pr_info("calling %s+ @ %i\n",
dev_name(dev), task_pid_nr(current));
calltime = ktime_get();
}
return calltime;
}
static void initcall_debug_report(struct device *dev, ktime_t calltime,
int error)
{
ktime_t delta, rettime;
if (initcall_debug) {
rettime = ktime_get();
delta = ktime_sub(rettime, calltime);
pr_info("call %s+ returned %d after %Ld usecs\n", dev_name(dev),
error, (unsigned long long)ktime_to_ns(delta) >> 10);
}
}
/**
* pm_op - Execute the PM operation appropriate for given PM event.
* @dev: Device to handle.
@ -172,13 +198,9 @@ static int pm_op(struct device *dev,
pm_message_t state)
{
int error = 0;
ktime_t calltime, delta, rettime;
ktime_t calltime;
if (initcall_debug) {
pr_info("calling %s+ @ %i\n",
dev_name(dev), task_pid_nr(current));
calltime = ktime_get();
}
calltime = initcall_debug_start(dev);
switch (state.event) {
#ifdef CONFIG_SUSPEND
@ -227,12 +249,7 @@ static int pm_op(struct device *dev,
error = -EINVAL;
}
if (initcall_debug) {
rettime = ktime_get();
delta = ktime_sub(rettime, calltime);
pr_info("call %s+ returned %d after %Ld usecs\n", dev_name(dev),
error, (unsigned long long)ktime_to_ns(delta) >> 10);
}
initcall_debug_report(dev, calltime, error);
return error;
}
@ -309,8 +326,9 @@ static int pm_noirq_op(struct device *dev,
if (initcall_debug) {
rettime = ktime_get();
delta = ktime_sub(rettime, calltime);
printk("initcall %s_i+ returned %d after %Ld usecs\n", dev_name(dev),
error, (unsigned long long)ktime_to_ns(delta) >> 10);
printk("initcall %s_i+ returned %d after %Ld usecs\n",
dev_name(dev), error,
(unsigned long long)ktime_to_ns(delta) >> 10);
}
return error;
@ -354,6 +372,23 @@ static void pm_dev_err(struct device *dev, pm_message_t state, char *info,
kobject_name(&dev->kobj), pm_verb(state.event), info, error);
}
static void dpm_show_time(ktime_t starttime, pm_message_t state, char *info)
{
ktime_t calltime;
s64 usecs64;
int usecs;
calltime = ktime_get();
usecs64 = ktime_to_ns(ktime_sub(calltime, starttime));
do_div(usecs64, NSEC_PER_USEC);
usecs = usecs64;
if (usecs == 0)
usecs = 1;
pr_info("PM: %s%s%s of devices complete after %ld.%03ld msecs\n",
info ?: "", info ? " " : "", pm_verb(state.event),
usecs / USEC_PER_MSEC, usecs % USEC_PER_MSEC);
}
/*------------------------- Resume routines -------------------------*/
/**
@ -390,6 +425,7 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
void dpm_resume_noirq(pm_message_t state)
{
struct device *dev;
ktime_t starttime = ktime_get();
mutex_lock(&dpm_list_mtx);
transition_started = false;
@ -403,10 +439,31 @@ void dpm_resume_noirq(pm_message_t state)
pm_dev_err(dev, state, " early", error);
}
mutex_unlock(&dpm_list_mtx);
dpm_show_time(starttime, state, "early");
resume_device_irqs();
}
EXPORT_SYMBOL_GPL(dpm_resume_noirq);
/**
* legacy_resume - Execute a legacy (bus or class) resume callback for device.
* @dev: Device to resume.
* @cb: Resume callback to execute.
*/
static int legacy_resume(struct device *dev, int (*cb)(struct device *dev))
{
int error;
ktime_t calltime;
calltime = initcall_debug_start(dev);
error = cb(dev);
suspend_report_result(cb, error);
initcall_debug_report(dev, calltime, error);
return error;
}
/**
* device_resume - Execute "resume" callbacks for given device.
* @dev: Device to handle.
@ -427,7 +484,7 @@ static int device_resume(struct device *dev, pm_message_t state)
error = pm_op(dev, dev->bus->pm, state);
} else if (dev->bus->resume) {
pm_dev_dbg(dev, state, "legacy ");
error = dev->bus->resume(dev);
error = legacy_resume(dev, dev->bus->resume);
}
if (error)
goto End;
@ -448,7 +505,7 @@ static int device_resume(struct device *dev, pm_message_t state)
error = pm_op(dev, dev->class->pm, state);
} else if (dev->class->resume) {
pm_dev_dbg(dev, state, "legacy class ");
error = dev->class->resume(dev);
error = legacy_resume(dev, dev->class->resume);
}
}
End:
@ -468,6 +525,7 @@ static int device_resume(struct device *dev, pm_message_t state)
static void dpm_resume(pm_message_t state)
{
struct list_head list;
ktime_t starttime = ktime_get();
INIT_LIST_HEAD(&list);
mutex_lock(&dpm_list_mtx);
@ -496,6 +554,7 @@ static void dpm_resume(pm_message_t state)
}
list_splice(&list, &dpm_list);
mutex_unlock(&dpm_list_mtx);
dpm_show_time(starttime, state, NULL);
}
/**
@ -548,7 +607,7 @@ static void dpm_complete(pm_message_t state)
mutex_unlock(&dpm_list_mtx);
device_complete(dev, state);
pm_runtime_put_noidle(dev);
pm_runtime_put_sync(dev);
mutex_lock(&dpm_list_mtx);
}
@ -628,6 +687,7 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state)
int dpm_suspend_noirq(pm_message_t state)
{
struct device *dev;
ktime_t starttime = ktime_get();
int error = 0;
suspend_device_irqs();
@ -643,10 +703,33 @@ int dpm_suspend_noirq(pm_message_t state)
mutex_unlock(&dpm_list_mtx);
if (error)
dpm_resume_noirq(resume_event(state));
else
dpm_show_time(starttime, state, "late");
return error;
}
EXPORT_SYMBOL_GPL(dpm_suspend_noirq);
/**
* legacy_suspend - Execute a legacy (bus or class) suspend callback for device.
* @dev: Device to suspend.
* @cb: Suspend callback to execute.
*/
static int legacy_suspend(struct device *dev, pm_message_t state,
int (*cb)(struct device *dev, pm_message_t state))
{
int error;
ktime_t calltime;
calltime = initcall_debug_start(dev);
error = cb(dev, state);
suspend_report_result(cb, error);
initcall_debug_report(dev, calltime, error);
return error;
}
/**
* device_suspend - Execute "suspend" callbacks for given device.
* @dev: Device to handle.
@ -664,8 +747,7 @@ static int device_suspend(struct device *dev, pm_message_t state)
error = pm_op(dev, dev->class->pm, state);
} else if (dev->class->suspend) {
pm_dev_dbg(dev, state, "legacy class ");
error = dev->class->suspend(dev, state);
suspend_report_result(dev->class->suspend, error);
error = legacy_suspend(dev, state, dev->class->suspend);
}
if (error)
goto End;
@ -686,8 +768,7 @@ static int device_suspend(struct device *dev, pm_message_t state)
error = pm_op(dev, dev->bus->pm, state);
} else if (dev->bus->suspend) {
pm_dev_dbg(dev, state, "legacy ");
error = dev->bus->suspend(dev, state);
suspend_report_result(dev->bus->suspend, error);
error = legacy_suspend(dev, state, dev->bus->suspend);
}
}
End:
@ -703,6 +784,7 @@ static int device_suspend(struct device *dev, pm_message_t state)
static int dpm_suspend(pm_message_t state)
{
struct list_head list;
ktime_t starttime = ktime_get();
int error = 0;
INIT_LIST_HEAD(&list);
@ -728,6 +810,8 @@ static int dpm_suspend(pm_message_t state)
}
list_splice(&list, dpm_list.prev);
mutex_unlock(&dpm_list_mtx);
if (!error)
dpm_show_time(starttime, state, NULL);
return error;
}
@ -796,7 +880,7 @@ static int dpm_prepare(pm_message_t state)
pm_runtime_get_noresume(dev);
if (pm_runtime_barrier(dev) && device_may_wakeup(dev)) {
/* Wake-up requested during system sleep transition. */
pm_runtime_put_noidle(dev);
pm_runtime_put_sync(dev);
error = -EBUSY;
} else {
error = device_prepare(dev, state);

View File

@ -84,6 +84,19 @@ static int __pm_runtime_idle(struct device *dev)
dev->bus->pm->runtime_idle(dev);
spin_lock_irq(&dev->power.lock);
} else if (dev->type && dev->type->pm && dev->type->pm->runtime_idle) {
spin_unlock_irq(&dev->power.lock);
dev->type->pm->runtime_idle(dev);
spin_lock_irq(&dev->power.lock);
} else if (dev->class && dev->class->pm
&& dev->class->pm->runtime_idle) {
spin_unlock_irq(&dev->power.lock);
dev->class->pm->runtime_idle(dev);
spin_lock_irq(&dev->power.lock);
}
@ -192,6 +205,22 @@ int __pm_runtime_suspend(struct device *dev, bool from_wq)
retval = dev->bus->pm->runtime_suspend(dev);
spin_lock_irq(&dev->power.lock);
dev->power.runtime_error = retval;
} else if (dev->type && dev->type->pm
&& dev->type->pm->runtime_suspend) {
spin_unlock_irq(&dev->power.lock);
retval = dev->type->pm->runtime_suspend(dev);
spin_lock_irq(&dev->power.lock);
dev->power.runtime_error = retval;
} else if (dev->class && dev->class->pm
&& dev->class->pm->runtime_suspend) {
spin_unlock_irq(&dev->power.lock);
retval = dev->class->pm->runtime_suspend(dev);
spin_lock_irq(&dev->power.lock);
dev->power.runtime_error = retval;
} else {
@ -357,6 +386,22 @@ int __pm_runtime_resume(struct device *dev, bool from_wq)
retval = dev->bus->pm->runtime_resume(dev);
spin_lock_irq(&dev->power.lock);
dev->power.runtime_error = retval;
} else if (dev->type && dev->type->pm
&& dev->type->pm->runtime_resume) {
spin_unlock_irq(&dev->power.lock);
retval = dev->type->pm->runtime_resume(dev);
spin_lock_irq(&dev->power.lock);
dev->power.runtime_error = retval;
} else if (dev->class && dev->class->pm
&& dev->class->pm->runtime_resume) {
spin_unlock_irq(&dev->power.lock);
retval = dev->class->pm->runtime_resume(dev);
spin_lock_irq(&dev->power.lock);
dev->power.runtime_error = retval;
} else {
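All three hunks above repeat the same shape: dev->power.lock cannot be held across a subsystem callback, because ->runtime_idle/suspend/resume may sleep, so the lock is dropped and retaken around the call. A condensed sketch of the bus/type/class dispatch (the helper name is illustrative; it is called with dev->power.lock held):

static int example_runtime_suspend_dispatch(struct device *dev)
{
	int (*cb)(struct device *) = NULL;
	int retval = -ENOSYS;

	if (dev->bus && dev->bus->pm && dev->bus->pm->runtime_suspend)
		cb = dev->bus->pm->runtime_suspend;
	else if (dev->type && dev->type->pm && dev->type->pm->runtime_suspend)
		cb = dev->type->pm->runtime_suspend;
	else if (dev->class && dev->class->pm && dev->class->pm->runtime_suspend)
		cb = dev->class->pm->runtime_suspend;

	if (cb) {
		spin_unlock_irq(&dev->power.lock);	/* callback may sleep */
		retval = cb(dev);
		spin_lock_irq(&dev->power.lock);
		dev->power.runtime_error = retval;
	}
	return retval;
}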

View File

@ -307,6 +307,7 @@ static void btusb_bulk_complete(struct urb *urb)
return;
usb_anchor_urb(urb, &data->bulk_anchor);
usb_mark_last_busy(data->udev);
err = usb_submit_urb(urb, GFP_ATOMIC);
if (err < 0) {

View File

@ -358,7 +358,7 @@ struct port {
u8 update_flow_control;
struct ctrl_ul ctrl_ul;
struct ctrl_dl ctrl_dl;
struct kfifo *fifo_ul;
struct kfifo fifo_ul;
void __iomem *dl_addr[2];
u32 dl_size[2];
u8 toggle_dl;
@ -685,8 +685,6 @@ static int nozomi_read_config_table(struct nozomi *dc)
dump_table(dc);
for (i = PORT_MDM; i < MAX_PORT; i++) {
dc->port[i].fifo_ul =
kfifo_alloc(FIFO_BUFFER_SIZE_UL, GFP_ATOMIC, NULL);
memset(&dc->port[i].ctrl_dl, 0, sizeof(struct ctrl_dl));
memset(&dc->port[i].ctrl_ul, 0, sizeof(struct ctrl_ul));
}
@ -798,7 +796,7 @@ static int send_data(enum port_type index, struct nozomi *dc)
struct tty_struct *tty = tty_port_tty_get(&port->port);
/* Get data from tty and place in buf for now */
size = __kfifo_get(port->fifo_ul, dc->send_buf,
size = kfifo_out(&port->fifo_ul, dc->send_buf,
ul_size < SEND_BUF_MAX ? ul_size : SEND_BUF_MAX);
if (size == 0) {
@ -988,11 +986,11 @@ static int receive_flow_control(struct nozomi *dc)
} else if (old_ctrl.CTS == 0 && ctrl_dl.CTS == 1) {
if (__kfifo_len(dc->port[port].fifo_ul)) {
if (kfifo_len(&dc->port[port].fifo_ul)) {
DBG1("Enable interrupt (0x%04X) on port: %d",
enable_ier, port);
DBG1("Data in buffer [%d], enable transmit! ",
__kfifo_len(dc->port[port].fifo_ul));
kfifo_len(&dc->port[port].fifo_ul));
enable_transmit_ul(port, dc);
} else {
DBG1("No data in buffer...");
@ -1433,6 +1431,16 @@ static int __devinit nozomi_card_init(struct pci_dev *pdev,
goto err_free_sbuf;
}
for (i = PORT_MDM; i < MAX_PORT; i++) {
if (kfifo_alloc(&dc->port[i].fifo_ul,
FIFO_BUFFER_SIZE_UL, GFP_ATOMIC)) {
dev_err(&pdev->dev,
"Could not allocate kfifo buffer\n");
ret = -ENOMEM;
goto err_free_kfifo;
}
}
spin_lock_init(&dc->spin_mutex);
nozomi_setup_private_data(dc);
@ -1445,7 +1453,7 @@ static int __devinit nozomi_card_init(struct pci_dev *pdev,
NOZOMI_NAME, dc);
if (unlikely(ret)) {
dev_err(&pdev->dev, "can't request irq %d\n", pdev->irq);
goto err_free_sbuf;
goto err_free_kfifo;
}
DBG1("base_addr: %p", dc->base_addr);
@ -1464,13 +1472,28 @@ static int __devinit nozomi_card_init(struct pci_dev *pdev,
dc->state = NOZOMI_STATE_ENABLED;
for (i = 0; i < MAX_PORT; i++) {
struct device *tty_dev;
mutex_init(&dc->port[i].tty_sem);
tty_port_init(&dc->port[i].port);
tty_register_device(ntty_driver, dc->index_start + i,
tty_dev = tty_register_device(ntty_driver, dc->index_start + i,
&pdev->dev);
if (IS_ERR(tty_dev)) {
ret = PTR_ERR(tty_dev);
dev_err(&pdev->dev, "Could not allocate tty?\n");
goto err_free_tty;
}
}
return 0;
err_free_tty:
for (i = dc->index_start; i < dc->index_start + MAX_PORT; ++i)
tty_unregister_device(ntty_driver, i);
err_free_kfifo:
for (i = 0; i < MAX_PORT; i++)
kfifo_free(&dc->port[i].fifo_ul);
err_free_sbuf:
kfree(dc->send_buf);
iounmap(dc->base_addr);
@ -1536,8 +1559,7 @@ static void __devexit nozomi_card_exit(struct pci_dev *pdev)
free_irq(pdev->irq, dc);
for (i = 0; i < MAX_PORT; i++)
if (dc->port[i].fifo_ul)
kfifo_free(dc->port[i].fifo_ul);
kfifo_free(&dc->port[i].fifo_ul);
kfree(dc->send_buf);
@ -1673,7 +1695,7 @@ static int ntty_write(struct tty_struct *tty, const unsigned char *buffer,
goto exit;
}
rval = __kfifo_put(port->fifo_ul, (unsigned char *)buffer, count);
rval = kfifo_in(&port->fifo_ul, (unsigned char *)buffer, count);
/* notify card */
if (unlikely(dc == NULL)) {
@ -1721,7 +1743,7 @@ static int ntty_write_room(struct tty_struct *tty)
if (!port->port.count)
goto exit;
room = port->fifo_ul->size - __kfifo_len(port->fifo_ul);
room = port->fifo_ul.size - kfifo_len(&port->fifo_ul);
exit:
mutex_unlock(&port->tty_sem);
@ -1878,7 +1900,7 @@ static s32 ntty_chars_in_buffer(struct tty_struct *tty)
goto exit_in_buffer;
}
rval = __kfifo_len(port->fifo_ul);
rval = kfifo_len(&port->fifo_ul);
exit_in_buffer:
return rval;
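This file and the sonypi driver below are mechanical conversions to the reworked kfifo API: struct kfifo is embedded rather than pointed to, kfifo_alloc() fills it in and returns an errno instead of a pointer, and __kfifo_put()/__kfifo_get() become kfifo_in()/kfifo_out() (with kfifo_in_locked()/kfifo_out_locked() taking an external spinlock, as sonypi uses). A minimal sketch of the new usage; the size and names are illustrative:

#include <linux/kfifo.h>

static struct kfifo example_fifo;

static int example_fifo_init(void)
{
	int err = kfifo_alloc(&example_fifo, 4096, GFP_KERNEL);

	if (err)				/* plain errno, no IS_ERR() */
		return err;
	kfifo_in(&example_fifo, "abc", 3);	/* was __kfifo_put() */
	return 0;
}

static void example_fifo_exit(void)
{
	char buf[3];
	unsigned int got;

	got = kfifo_out(&example_fifo, buf, sizeof(buf));	/* was __kfifo_get() */
	pr_debug("drained %u bytes\n", got);
	kfifo_free(&example_fifo);
}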

View File

@ -487,7 +487,7 @@ static struct sonypi_device {
int camera_power;
int bluetooth_power;
struct mutex lock;
struct kfifo *fifo;
struct kfifo fifo;
spinlock_t fifo_lock;
wait_queue_head_t fifo_proc_list;
struct fasync_struct *fifo_async;
@ -496,7 +496,7 @@ static struct sonypi_device {
struct input_dev *input_jog_dev;
struct input_dev *input_key_dev;
struct work_struct input_work;
struct kfifo *input_fifo;
struct kfifo input_fifo;
spinlock_t input_fifo_lock;
} sonypi_device;
@ -777,8 +777,9 @@ static void input_keyrelease(struct work_struct *work)
{
struct sonypi_keypress kp;
while (kfifo_get(sonypi_device.input_fifo, (unsigned char *)&kp,
sizeof(kp)) == sizeof(kp)) {
while (kfifo_out_locked(&sonypi_device.input_fifo, (unsigned char *)&kp,
sizeof(kp), &sonypi_device.input_fifo_lock)
== sizeof(kp)) {
msleep(10);
input_report_key(kp.dev, kp.key, 0);
input_sync(kp.dev);
@ -827,8 +828,9 @@ static void sonypi_report_input_event(u8 event)
if (kp.dev) {
input_report_key(kp.dev, kp.key, 1);
input_sync(kp.dev);
kfifo_put(sonypi_device.input_fifo,
(unsigned char *)&kp, sizeof(kp));
kfifo_in_locked(&sonypi_device.input_fifo,
(unsigned char *)&kp, sizeof(kp),
&sonypi_device.input_fifo_lock);
schedule_work(&sonypi_device.input_work);
}
}
@ -880,7 +882,8 @@ static irqreturn_t sonypi_irq(int irq, void *dev_id)
acpi_bus_generate_proc_event(sonypi_acpi_device, 1, event);
#endif
kfifo_put(sonypi_device.fifo, (unsigned char *)&event, sizeof(event));
kfifo_in_locked(&sonypi_device.fifo, (unsigned char *)&event,
sizeof(event), &sonypi_device.fifo_lock);
kill_fasync(&sonypi_device.fifo_async, SIGIO, POLL_IN);
wake_up_interruptible(&sonypi_device.fifo_proc_list);
@ -906,7 +909,7 @@ static int sonypi_misc_open(struct inode *inode, struct file *file)
mutex_lock(&sonypi_device.lock);
/* Flush input queue on first open */
if (!sonypi_device.open_count)
kfifo_reset(sonypi_device.fifo);
kfifo_reset(&sonypi_device.fifo);
sonypi_device.open_count++;
mutex_unlock(&sonypi_device.lock);
unlock_kernel();
@ -919,17 +922,18 @@ static ssize_t sonypi_misc_read(struct file *file, char __user *buf,
ssize_t ret;
unsigned char c;
if ((kfifo_len(sonypi_device.fifo) == 0) &&
if ((kfifo_len(&sonypi_device.fifo) == 0) &&
(file->f_flags & O_NONBLOCK))
return -EAGAIN;
ret = wait_event_interruptible(sonypi_device.fifo_proc_list,
kfifo_len(sonypi_device.fifo) != 0);
kfifo_len(&sonypi_device.fifo) != 0);
if (ret)
return ret;
while (ret < count &&
(kfifo_get(sonypi_device.fifo, &c, sizeof(c)) == sizeof(c))) {
(kfifo_out_locked(&sonypi_device.fifo, &c, sizeof(c),
&sonypi_device.fifo_lock) == sizeof(c))) {
if (put_user(c, buf++))
return -EFAULT;
ret++;
@ -946,7 +950,7 @@ static ssize_t sonypi_misc_read(struct file *file, char __user *buf,
static unsigned int sonypi_misc_poll(struct file *file, poll_table *wait)
{
poll_wait(file, &sonypi_device.fifo_proc_list, wait);
if (kfifo_len(sonypi_device.fifo))
if (kfifo_len(&sonypi_device.fifo))
return POLLIN | POLLRDNORM;
return 0;
}
@ -1313,11 +1317,10 @@ static int __devinit sonypi_probe(struct platform_device *dev)
"http://www.linux.it/~malattia/wiki/index.php/Sony_drivers\n");
spin_lock_init(&sonypi_device.fifo_lock);
sonypi_device.fifo = kfifo_alloc(SONYPI_BUF_SIZE, GFP_KERNEL,
&sonypi_device.fifo_lock);
if (IS_ERR(sonypi_device.fifo)) {
error = kfifo_alloc(&sonypi_device.fifo, SONYPI_BUF_SIZE, GFP_KERNEL);
if (error) {
printk(KERN_ERR "sonypi: kfifo_alloc failed\n");
return PTR_ERR(sonypi_device.fifo);
return error;
}
init_waitqueue_head(&sonypi_device.fifo_proc_list);
@ -1393,12 +1396,10 @@ static int __devinit sonypi_probe(struct platform_device *dev)
}
spin_lock_init(&sonypi_device.input_fifo_lock);
sonypi_device.input_fifo =
kfifo_alloc(SONYPI_BUF_SIZE, GFP_KERNEL,
&sonypi_device.input_fifo_lock);
if (IS_ERR(sonypi_device.input_fifo)) {
error = kfifo_alloc(&sonypi_device.input_fifo, SONYPI_BUF_SIZE,
GFP_KERNEL);
if (error) {
printk(KERN_ERR "sonypi: kfifo_alloc failed\n");
error = PTR_ERR(sonypi_device.input_fifo);
goto err_inpdev_unregister;
}
@ -1423,7 +1424,7 @@ static int __devinit sonypi_probe(struct platform_device *dev)
pci_disable_device(pcidev);
err_put_pcidev:
pci_dev_put(pcidev);
kfifo_free(sonypi_device.fifo);
kfifo_free(&sonypi_device.fifo);
return error;
}
@ -1438,7 +1439,7 @@ static int __devexit sonypi_remove(struct platform_device *dev)
if (useinput) {
input_unregister_device(sonypi_device.input_key_dev);
input_unregister_device(sonypi_device.input_jog_dev);
kfifo_free(sonypi_device.input_fifo);
kfifo_free(&sonypi_device.input_fifo);
}
misc_deregister(&sonypi_misc_device);
@ -1451,7 +1452,7 @@ static int __devexit sonypi_remove(struct platform_device *dev)
pci_dev_put(sonypi_device.dev);
}
kfifo_free(sonypi_device.fifo);
kfifo_free(&sonypi_device.fifo);
return 0;
}

View File

@ -434,11 +434,11 @@ static int drm_version(struct drm_device *dev, void *data,
* Looks up the ioctl function in the ::ioctls table, checking for root
* privileges if so required, and dispatches to the respective function.
*/
int drm_ioctl(struct inode *inode, struct file *filp,
long drm_ioctl(struct file *filp,
unsigned int cmd, unsigned long arg)
{
struct drm_file *file_priv = filp->private_data;
struct drm_device *dev = file_priv->minor->dev;
struct drm_device *dev;
struct drm_ioctl_desc *ioctl;
drm_ioctl_t *func;
unsigned int nr = DRM_IOCTL_NR(cmd);
@ -446,6 +446,7 @@ int drm_ioctl(struct inode *inode, struct file *filp,
char stack_kdata[128];
char *kdata = NULL;
dev = file_priv->minor->dev;
atomic_inc(&dev->ioctl_count);
atomic_inc(&dev->counts[_DRM_STAT_IOCTLS]);
++file_priv->ioctl_count;
@ -501,7 +502,13 @@ int drm_ioctl(struct inode *inode, struct file *filp,
goto err_i1;
}
}
retcode = func(dev, kdata, file_priv);
if (ioctl->flags & DRM_UNLOCKED)
retcode = func(dev, kdata, file_priv);
else {
lock_kernel();
retcode = func(dev, kdata, file_priv);
unlock_kernel();
}
if (cmd & IOC_OUT) {
if (copy_to_user((void __user *)arg, kdata,

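The change above turns drm_ioctl() into a BKL-free unlocked_ioctl entry point: the inode argument is gone, and the big kernel lock is now taken per ioctl unless the entry is flagged DRM_UNLOCKED. Drivers hook it up as the later hunks in this commit show, i.e. (sketch):

static const struct file_operations example_drm_fops = {
	.owner		= THIS_MODULE,
	.open		= drm_open,
	.release	= drm_release,
	.unlocked_ioctl	= drm_ioctl,	/* was .ioctl = drm_ioctl */
	.mmap		= drm_mmap,
	.poll		= drm_poll,
	.fasync		= drm_fasync,
};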
View File

@ -913,7 +913,7 @@ static int drm_cvt_modes(struct drm_connector *connector,
const int rates[] = { 60, 85, 75, 60, 50 };
for (i = 0; i < 4; i++) {
int width, height;
int uninitialized_var(width), height;
cvt = &(timing->data.other_data.data.cvt[i]);
height = (cvt->code[0] + ((cvt->code[1] & 0xf0) << 8) + 1) * 2;

View File

@ -104,7 +104,7 @@ static int compat_drm_version(struct file *file, unsigned int cmd,
&version->desc))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
err = drm_ioctl(file,
DRM_IOCTL_VERSION, (unsigned long)version);
if (err)
return err;
@ -145,8 +145,7 @@ static int compat_drm_getunique(struct file *file, unsigned int cmd,
&u->unique))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_GET_UNIQUE, (unsigned long)u);
err = drm_ioctl(file, DRM_IOCTL_GET_UNIQUE, (unsigned long)u);
if (err)
return err;
@ -174,8 +173,7 @@ static int compat_drm_setunique(struct file *file, unsigned int cmd,
&u->unique))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_SET_UNIQUE, (unsigned long)u);
return drm_ioctl(file, DRM_IOCTL_SET_UNIQUE, (unsigned long)u);
}
typedef struct drm_map32 {
@ -205,8 +203,7 @@ static int compat_drm_getmap(struct file *file, unsigned int cmd,
if (__put_user(idx, &map->offset))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_GET_MAP, (unsigned long)map);
err = drm_ioctl(file, DRM_IOCTL_GET_MAP, (unsigned long)map);
if (err)
return err;
@ -246,8 +243,7 @@ static int compat_drm_addmap(struct file *file, unsigned int cmd,
|| __put_user(m32.flags, &map->flags))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_ADD_MAP, (unsigned long)map);
err = drm_ioctl(file, DRM_IOCTL_ADD_MAP, (unsigned long)map);
if (err)
return err;
@ -284,8 +280,7 @@ static int compat_drm_rmmap(struct file *file, unsigned int cmd,
if (__put_user((void *)(unsigned long)handle, &map->handle))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_RM_MAP, (unsigned long)map);
return drm_ioctl(file, DRM_IOCTL_RM_MAP, (unsigned long)map);
}
typedef struct drm_client32 {
@ -314,8 +309,7 @@ static int compat_drm_getclient(struct file *file, unsigned int cmd,
if (__put_user(idx, &client->idx))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_GET_CLIENT, (unsigned long)client);
err = drm_ioctl(file, DRM_IOCTL_GET_CLIENT, (unsigned long)client);
if (err)
return err;
@ -351,8 +345,7 @@ static int compat_drm_getstats(struct file *file, unsigned int cmd,
if (!access_ok(VERIFY_WRITE, stats, sizeof(*stats)))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_GET_STATS, (unsigned long)stats);
err = drm_ioctl(file, DRM_IOCTL_GET_STATS, (unsigned long)stats);
if (err)
return err;
@ -395,8 +388,7 @@ static int compat_drm_addbufs(struct file *file, unsigned int cmd,
|| __put_user(agp_start, &buf->agp_start))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_ADD_BUFS, (unsigned long)buf);
err = drm_ioctl(file, DRM_IOCTL_ADD_BUFS, (unsigned long)buf);
if (err)
return err;
@ -427,8 +419,7 @@ static int compat_drm_markbufs(struct file *file, unsigned int cmd,
|| __put_user(b32.high_mark, &buf->high_mark))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_MARK_BUFS, (unsigned long)buf);
return drm_ioctl(file, DRM_IOCTL_MARK_BUFS, (unsigned long)buf);
}
typedef struct drm_buf_info32 {
@ -469,8 +460,7 @@ static int compat_drm_infobufs(struct file *file, unsigned int cmd,
|| __put_user(list, &request->list))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_INFO_BUFS, (unsigned long)request);
err = drm_ioctl(file, DRM_IOCTL_INFO_BUFS, (unsigned long)request);
if (err)
return err;
@ -531,8 +521,7 @@ static int compat_drm_mapbufs(struct file *file, unsigned int cmd,
|| __put_user(list, &request->list))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_MAP_BUFS, (unsigned long)request);
err = drm_ioctl(file, DRM_IOCTL_MAP_BUFS, (unsigned long)request);
if (err)
return err;
@ -578,8 +567,7 @@ static int compat_drm_freebufs(struct file *file, unsigned int cmd,
&request->list))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_FREE_BUFS, (unsigned long)request);
return drm_ioctl(file, DRM_IOCTL_FREE_BUFS, (unsigned long)request);
}
typedef struct drm_ctx_priv_map32 {
@ -605,8 +593,7 @@ static int compat_drm_setsareactx(struct file *file, unsigned int cmd,
&request->handle))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_SET_SAREA_CTX, (unsigned long)request);
return drm_ioctl(file, DRM_IOCTL_SET_SAREA_CTX, (unsigned long)request);
}
static int compat_drm_getsareactx(struct file *file, unsigned int cmd,
@ -628,8 +615,7 @@ static int compat_drm_getsareactx(struct file *file, unsigned int cmd,
if (__put_user(ctx_id, &request->ctx_id))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_GET_SAREA_CTX, (unsigned long)request);
err = drm_ioctl(file, DRM_IOCTL_GET_SAREA_CTX, (unsigned long)request);
if (err)
return err;
@ -664,8 +650,7 @@ static int compat_drm_resctx(struct file *file, unsigned int cmd,
&res->contexts))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_RES_CTX, (unsigned long)res);
err = drm_ioctl(file, DRM_IOCTL_RES_CTX, (unsigned long)res);
if (err)
return err;
@ -718,8 +703,7 @@ static int compat_drm_dma(struct file *file, unsigned int cmd,
&d->request_sizes))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_DMA, (unsigned long)d);
err = drm_ioctl(file, DRM_IOCTL_DMA, (unsigned long)d);
if (err)
return err;
@ -751,8 +735,7 @@ static int compat_drm_agp_enable(struct file *file, unsigned int cmd,
if (put_user(m32.mode, &mode->mode))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_AGP_ENABLE, (unsigned long)mode);
return drm_ioctl(file, DRM_IOCTL_AGP_ENABLE, (unsigned long)mode);
}
typedef struct drm_agp_info32 {
@ -781,8 +764,7 @@ static int compat_drm_agp_info(struct file *file, unsigned int cmd,
if (!access_ok(VERIFY_WRITE, info, sizeof(*info)))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_AGP_INFO, (unsigned long)info);
err = drm_ioctl(file, DRM_IOCTL_AGP_INFO, (unsigned long)info);
if (err)
return err;
@ -827,16 +809,14 @@ static int compat_drm_agp_alloc(struct file *file, unsigned int cmd,
|| __put_user(req32.type, &request->type))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_AGP_ALLOC, (unsigned long)request);
err = drm_ioctl(file, DRM_IOCTL_AGP_ALLOC, (unsigned long)request);
if (err)
return err;
if (__get_user(req32.handle, &request->handle)
|| __get_user(req32.physical, &request->physical)
|| copy_to_user(argp, &req32, sizeof(req32))) {
drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_AGP_FREE, (unsigned long)request);
drm_ioctl(file, DRM_IOCTL_AGP_FREE, (unsigned long)request);
return -EFAULT;
}
@ -856,8 +836,7 @@ static int compat_drm_agp_free(struct file *file, unsigned int cmd,
|| __put_user(handle, &request->handle))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_AGP_FREE, (unsigned long)request);
return drm_ioctl(file, DRM_IOCTL_AGP_FREE, (unsigned long)request);
}
typedef struct drm_agp_binding32 {
@ -881,8 +860,7 @@ static int compat_drm_agp_bind(struct file *file, unsigned int cmd,
|| __put_user(req32.offset, &request->offset))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_AGP_BIND, (unsigned long)request);
return drm_ioctl(file, DRM_IOCTL_AGP_BIND, (unsigned long)request);
}
static int compat_drm_agp_unbind(struct file *file, unsigned int cmd,
@ -898,8 +876,7 @@ static int compat_drm_agp_unbind(struct file *file, unsigned int cmd,
|| __put_user(handle, &request->handle))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_AGP_UNBIND, (unsigned long)request);
return drm_ioctl(file, DRM_IOCTL_AGP_UNBIND, (unsigned long)request);
}
#endif /* __OS_HAS_AGP */
@ -923,8 +900,7 @@ static int compat_drm_sg_alloc(struct file *file, unsigned int cmd,
|| __put_user(x, &request->size))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_SG_ALLOC, (unsigned long)request);
err = drm_ioctl(file, DRM_IOCTL_SG_ALLOC, (unsigned long)request);
if (err)
return err;
@ -950,8 +926,7 @@ static int compat_drm_sg_free(struct file *file, unsigned int cmd,
|| __put_user(x << PAGE_SHIFT, &request->handle))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_SG_FREE, (unsigned long)request);
return drm_ioctl(file, DRM_IOCTL_SG_FREE, (unsigned long)request);
}
#if defined(CONFIG_X86) || defined(CONFIG_IA64)
@ -981,8 +956,7 @@ static int compat_drm_update_draw(struct file *file, unsigned int cmd,
__put_user(update32.data, &request->data))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_UPDATE_DRAW, (unsigned long)request);
err = drm_ioctl(file, DRM_IOCTL_UPDATE_DRAW, (unsigned long)request);
return err;
}
#endif
@ -1023,8 +997,7 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
|| __put_user(req32.request.signal, &request->request.signal))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_WAIT_VBLANK, (unsigned long)request);
err = drm_ioctl(file, DRM_IOCTL_WAIT_VBLANK, (unsigned long)request);
if (err)
return err;
@ -1094,16 +1067,14 @@ long drm_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
* than always failing.
*/
if (nr >= ARRAY_SIZE(drm_compat_ioctls))
return drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg);
return drm_ioctl(filp, cmd, arg);
fn = drm_compat_ioctls[nr];
lock_kernel(); /* XXX for now */
if (fn != NULL)
ret = (*fn) (filp, cmd, arg);
else
ret = drm_ioctl(filp->f_path.dentry->d_inode, filp, cmd, arg);
unlock_kernel();
ret = drm_ioctl(filp, cmd, arg);
return ret;
}
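
The hunks above, and the .ioctl to .unlocked_ioctl switches further down, are one mechanical conversion: drm_ioctl() no longer takes an inode argument and now handles its own locking, so both the BKL wrapping and the file->f_path.dentry->d_inode parameter disappear at every call site. A sketch of the two prototypes (the old one is reconstructed from the call sites; DRM_IOCTL_FOO stands in for any command):

/* Before: registered as file_operations.ioctl, called under the BKL. */
int drm_ioctl(struct inode *inode, struct file *filp,
              unsigned int cmd, unsigned long arg);

/* After: registered as file_operations.unlocked_ioctl. */
long drm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);

/* Call sites collapse from
 *     drm_ioctl(file->f_path.dentry->d_inode, file, DRM_IOCTL_FOO, arg);
 * to
 *     drm_ioctl(file, DRM_IOCTL_FOO, arg);
 */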


@ -358,7 +358,7 @@ struct drm_mm_node *drm_mm_search_free(const struct drm_mm *mm,
if (entry->size >= size + wasted) {
if (!best_match)
return entry;
if (size < best_size) {
if (entry->size < best_size) {
best = entry;
best_size = entry->size;
}
@ -408,7 +408,7 @@ struct drm_mm_node *drm_mm_search_free_in_range(const struct drm_mm *mm,
if (entry->size >= size + wasted) {
if (!best_match)
return entry;
if (size < best_size) {
if (entry->size < best_size) {
best = entry;
best_size = entry->size;
}
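
Both hunks above fix the same best-fit defect: the old test compared the caller's requested size against best_size, so once a first fit was recorded, any later fitting hole could displace a tighter one and the running minimum never tracked the candidates. Comparing the candidate's own entry->size restores a genuine smallest-fit search. The corrected shape in isolation (a standalone sketch with an invented hole type, not the driver code):

#include <stddef.h>

struct hole { unsigned long size; };

/* Return the smallest hole that still satisfies the request. */
static struct hole *best_fit(struct hole *holes, int n, unsigned long size)
{
	struct hole *best = NULL;
	unsigned long best_size = ~0UL;
	int i;

	for (i = 0; i < n; i++) {
		if (holes[i].size < size)
			continue;                     /* too small */
		if (holes[i].size < best_size) {  /* the candidate's size, not the request */
			best = &holes[i];
			best_size = holes[i].size;
		}
	}
	return best;
}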


@ -408,6 +408,11 @@ static int ch7006_probe(struct i2c_client *client, const struct i2c_device_id *i
ch7006_info(client, "Detected version ID: %x\n", val);
/* I don't know what this is for, but otherwise I get no
* signal.
*/
ch7006_write(client, 0x3d, 0x0);
return 0;
fail:


@ -427,11 +427,6 @@ void ch7006_state_load(struct i2c_client *client,
ch7006_load_reg(client, state, CH7006_SUBC_INC7);
ch7006_load_reg(client, state, CH7006_PLL_CONTROL);
ch7006_load_reg(client, state, CH7006_CALC_SUBC_INC0);
/* I don't know what this is for, but otherwise I get no
* signal.
*/
ch7006_write(client, 0x3d, 0x0);
}
void ch7006_state_save(struct i2c_client *client,


@ -115,7 +115,7 @@ static int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma)
static const struct file_operations i810_buffer_fops = {
.open = drm_open,
.release = drm_release,
.ioctl = drm_ioctl,
.unlocked_ioctl = drm_ioctl,
.mmap = i810_mmap_buffers,
.fasync = drm_fasync,
};


@ -59,7 +59,7 @@ static struct drm_driver driver = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release,
.ioctl = drm_ioctl,
.unlocked_ioctl = drm_ioctl,
.mmap = drm_mmap,
.poll = drm_poll,
.fasync = drm_fasync,


@ -117,7 +117,7 @@ static int i830_mmap_buffers(struct file *filp, struct vm_area_struct *vma)
static const struct file_operations i830_buffer_fops = {
.open = drm_open,
.release = drm_release,
.ioctl = drm_ioctl,
.unlocked_ioctl = drm_ioctl,
.mmap = i830_mmap_buffers,
.fasync = drm_fasync,
};


@ -70,7 +70,7 @@ static struct drm_driver driver = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release,
.ioctl = drm_ioctl,
.unlocked_ioctl = drm_ioctl,
.mmap = drm_mmap,
.poll = drm_poll,
.fasync = drm_fasync,


@ -329,7 +329,7 @@ static struct drm_driver driver = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release,
.ioctl = drm_ioctl,
.unlocked_ioctl = drm_ioctl,
.mmap = drm_gem_mmap,
.poll = drm_poll,
.fasync = drm_fasync,


@ -66,8 +66,7 @@ static int compat_i915_batchbuffer(struct file *file, unsigned int cmd,
&batchbuffer->cliprects))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_I915_BATCHBUFFER,
return drm_ioctl(file, DRM_IOCTL_I915_BATCHBUFFER,
(unsigned long)batchbuffer);
}
@ -102,8 +101,8 @@ static int compat_i915_cmdbuffer(struct file *file, unsigned int cmd,
&cmdbuffer->cliprects))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_I915_CMDBUFFER, (unsigned long)cmdbuffer);
return drm_ioctl(file, DRM_IOCTL_I915_CMDBUFFER,
(unsigned long)cmdbuffer);
}
typedef struct drm_i915_irq_emit32 {
@ -125,8 +124,8 @@ static int compat_i915_irq_emit(struct file *file, unsigned int cmd,
&request->irq_seq))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_I915_IRQ_EMIT, (unsigned long)request);
return drm_ioctl(file, DRM_IOCTL_I915_IRQ_EMIT,
(unsigned long)request);
}
typedef struct drm_i915_getparam32 {
int param;
@ -149,8 +148,8 @@ static int compat_i915_getparam(struct file *file, unsigned int cmd,
&request->value))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_I915_GETPARAM, (unsigned long)request);
return drm_ioctl(file, DRM_IOCTL_I915_GETPARAM,
(unsigned long)request);
}
typedef struct drm_i915_mem_alloc32 {
@ -178,8 +177,8 @@ static int compat_i915_alloc(struct file *file, unsigned int cmd,
&request->region_offset))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_I915_ALLOC, (unsigned long)request);
return drm_ioctl(file, DRM_IOCTL_I915_ALLOC,
(unsigned long)request);
}
drm_ioctl_compat_t *i915_compat_ioctls[] = {
@ -211,12 +210,10 @@ long i915_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
if (nr < DRM_COMMAND_BASE + DRM_ARRAY_SIZE(i915_compat_ioctls))
fn = i915_compat_ioctls[nr - DRM_COMMAND_BASE];
lock_kernel(); /* XXX for now */
if (fn != NULL)
ret = (*fn) (filp, cmd, arg);
else
ret = drm_ioctl(filp->f_path.dentry->d_inode, filp, cmd, arg);
unlock_kernel();
ret = drm_ioctl(filp, cmd, arg);
return ret;
}


@ -68,7 +68,7 @@ static struct drm_driver driver = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release,
.ioctl = drm_ioctl,
.unlocked_ioctl = drm_ioctl,
.mmap = drm_mmap,
.poll = drm_poll,
.fasync = drm_fasync,


@ -100,8 +100,7 @@ static int compat_mga_init(struct file *file, unsigned int cmd,
if (err)
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_MGA_INIT, (unsigned long)init);
return drm_ioctl(file, DRM_IOCTL_MGA_INIT, (unsigned long)init);
}
typedef struct drm_mga_getparam32 {
@ -125,8 +124,7 @@ static int compat_mga_getparam(struct file *file, unsigned int cmd,
&getparam->value))
return -EFAULT;
return drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_MGA_GETPARAM, (unsigned long)getparam);
return drm_ioctl(file, DRM_IOCTL_MGA_GETPARAM, (unsigned long)getparam);
}
typedef struct drm_mga_drm_bootstrap32 {
@ -166,8 +164,7 @@ static int compat_mga_dma_bootstrap(struct file *file, unsigned int cmd,
|| __put_user(dma_bootstrap32.agp_size, &dma_bootstrap->agp_size))
return -EFAULT;
err = drm_ioctl(file->f_path.dentry->d_inode, file,
DRM_IOCTL_MGA_DMA_BOOTSTRAP,
err = drm_ioctl(file, DRM_IOCTL_MGA_DMA_BOOTSTRAP,
(unsigned long)dma_bootstrap);
if (err)
return err;
@ -220,12 +217,10 @@ long mga_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
if (nr < DRM_COMMAND_BASE + DRM_ARRAY_SIZE(mga_compat_ioctls))
fn = mga_compat_ioctls[nr - DRM_COMMAND_BASE];
lock_kernel(); /* XXX for now */
if (fn != NULL)
ret = (*fn) (filp, cmd, arg);
else
ret = drm_ioctl(filp->f_path.dentry->d_inode, filp, cmd, arg);
unlock_kernel();
ret = drm_ioctl(filp, cmd, arg);
return ret;
}


@ -8,14 +8,15 @@ nouveau-y := nouveau_drv.o nouveau_state.o nouveau_channel.o nouveau_mem.o \
nouveau_sgdma.o nouveau_dma.o \
nouveau_bo.o nouveau_fence.o nouveau_gem.o nouveau_ttm.o \
nouveau_hw.o nouveau_calc.o nouveau_bios.o nouveau_i2c.o \
nouveau_display.o nouveau_connector.o nouveau_fbcon.o \
nouveau_dp.o \
nouveau_display.o nouveau_connector.o nouveau_fbcon.o \
nouveau_dp.o nouveau_grctx.o \
nv04_timer.o \
nv04_mc.o nv40_mc.o nv50_mc.o \
nv04_fb.o nv10_fb.o nv40_fb.o \
nv04_fifo.o nv10_fifo.o nv40_fifo.o nv50_fifo.o \
nv04_graph.o nv10_graph.o nv20_graph.o \
nv40_graph.o nv50_graph.o \
nv40_grctx.o \
nv04_instmem.o nv50_instmem.o \
nv50_crtc.o nv50_dac.o nv50_sor.o \
nv50_cursor.o nv50_display.o nv50_fbcon.o \

(file diff suppressed because it is too large)


@ -227,6 +227,7 @@ struct nvbios {
uint16_t pll_limit_tbl_ptr;
uint16_t ram_restrict_tbl_ptr;
uint8_t ram_restrict_group_count;
uint16_t some_script_ptr; /* BIT I + 14 */
uint16_t init96_tbl_ptr; /* BIT I + 16 */


@ -154,6 +154,11 @@ nouveau_bo_placement_set(struct nouveau_bo *nvbo, uint32_t memtype)
nvbo->placement.busy_placement = nvbo->placements;
nvbo->placement.num_placement = n;
nvbo->placement.num_busy_placement = n;
if (nvbo->pin_refcnt) {
while (n--)
nvbo->placements[n] |= TTM_PL_FLAG_NO_EVICT;
}
}
int
@ -400,10 +405,16 @@ nouveau_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
struct nouveau_bo *nvbo = nouveau_bo(bo);
switch (bo->mem.mem_type) {
case TTM_PL_VRAM:
nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_TT |
TTM_PL_FLAG_SYSTEM);
break;
default:
nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_SYSTEM);
break;
}
*pl = nvbo->placement;
}
@ -455,11 +466,8 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict, int no_wait,
int ret;
chan = nvbo->channel;
if (!chan || nvbo->tile_flags || nvbo->no_vm) {
if (!chan || nvbo->tile_flags || nvbo->no_vm)
chan = dev_priv->channel;
if (!chan)
return -EINVAL;
}
src_offset = old_mem->mm_node->start << PAGE_SHIFT;
dst_offset = new_mem->mm_node->start << PAGE_SHIFT;
@ -625,7 +633,8 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, bool intr,
return ret;
}
if (dev_priv->init_state != NOUVEAU_CARD_INIT_DONE)
if (dev_priv->init_state != NOUVEAU_CARD_INIT_DONE ||
!dev_priv->channel)
return ttm_bo_move_memcpy(bo, evict, no_wait, new_mem);
if (old_mem->mem_type == TTM_PL_SYSTEM && !bo->ttm) {


@ -86,7 +86,7 @@ nouveau_connector_destroy(struct drm_connector *drm_connector)
struct nouveau_connector *connector = nouveau_connector(drm_connector);
struct drm_device *dev = connector->base.dev;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
if (!connector)
return;
@ -420,7 +420,7 @@ nouveau_connector_native_mode(struct nouveau_connector *connector)
/* Use preferred mode if there is one.. */
list_for_each_entry(mode, &connector->base.probed_modes, head) {
if (mode->type & DRM_MODE_TYPE_PREFERRED) {
NV_DEBUG(dev, "native mode from preferred\n");
NV_DEBUG_KMS(dev, "native mode from preferred\n");
return drm_mode_duplicate(dev, mode);
}
}
@ -445,7 +445,7 @@ nouveau_connector_native_mode(struct nouveau_connector *connector)
largest = mode;
}
NV_DEBUG(dev, "native mode from largest: %dx%d@%d\n",
NV_DEBUG_KMS(dev, "native mode from largest: %dx%d@%d\n",
high_w, high_h, high_v);
return largest ? drm_mode_duplicate(dev, largest) : NULL;
}
@ -725,7 +725,7 @@ nouveau_connector_create(struct drm_device *dev, int index, int type)
struct drm_encoder *encoder;
int ret;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
nv_connector = kzalloc(sizeof(*nv_connector), GFP_KERNEL);
if (!nv_connector)


@ -187,7 +187,7 @@ nouveau_dp_link_train_adjust(struct drm_encoder *encoder, uint8_t *config)
if (ret)
return false;
NV_DEBUG(dev, "\t\tadjust 0x%02x 0x%02x\n", request[0], request[1]);
NV_DEBUG_KMS(dev, "\t\tadjust 0x%02x 0x%02x\n", request[0], request[1]);
/* Keep all lanes at the same level.. */
for (i = 0; i < nv_encoder->dp.link_nr; i++) {
@ -228,7 +228,7 @@ nouveau_dp_link_train_commit(struct drm_encoder *encoder, uint8_t *config)
int or = nv_encoder->or, link = !(nv_encoder->dcb->sorconf.link & 1);
int dpe_headerlen, ret, i;
NV_DEBUG(dev, "\t\tconfig 0x%02x 0x%02x 0x%02x 0x%02x\n",
NV_DEBUG_KMS(dev, "\t\tconfig 0x%02x 0x%02x 0x%02x 0x%02x\n",
config[0], config[1], config[2], config[3]);
dpe = nouveau_bios_dp_table(dev, nv_encoder->dcb, &dpe_headerlen);
@ -276,12 +276,12 @@ nouveau_dp_link_train(struct drm_encoder *encoder)
bool cr_done, cr_max_vs, eq_done;
int ret = 0, i, tries, voltage;
NV_DEBUG(dev, "link training!!\n");
NV_DEBUG_KMS(dev, "link training!!\n");
train:
cr_done = eq_done = false;
/* set link configuration */
NV_DEBUG(dev, "\tbegin train: bw %d, lanes %d\n",
NV_DEBUG_KMS(dev, "\tbegin train: bw %d, lanes %d\n",
nv_encoder->dp.link_bw, nv_encoder->dp.link_nr);
ret = nouveau_dp_link_bw_set(encoder, nv_encoder->dp.link_bw);
@ -297,7 +297,7 @@ nouveau_dp_link_train(struct drm_encoder *encoder)
return false;
/* clock recovery */
NV_DEBUG(dev, "\tbegin cr\n");
NV_DEBUG_KMS(dev, "\tbegin cr\n");
ret = nouveau_dp_link_train_set(encoder, DP_TRAINING_PATTERN_1);
if (ret)
goto stop;
@ -314,7 +314,7 @@ nouveau_dp_link_train(struct drm_encoder *encoder)
ret = auxch_rd(encoder, DP_LANE0_1_STATUS, status, 2);
if (ret)
break;
NV_DEBUG(dev, "\t\tstatus: 0x%02x 0x%02x\n",
NV_DEBUG_KMS(dev, "\t\tstatus: 0x%02x 0x%02x\n",
status[0], status[1]);
cr_done = true;
@ -346,7 +346,7 @@ nouveau_dp_link_train(struct drm_encoder *encoder)
goto stop;
/* channel equalisation */
NV_DEBUG(dev, "\tbegin eq\n");
NV_DEBUG_KMS(dev, "\tbegin eq\n");
ret = nouveau_dp_link_train_set(encoder, DP_TRAINING_PATTERN_2);
if (ret)
goto stop;
@ -357,7 +357,7 @@ nouveau_dp_link_train(struct drm_encoder *encoder)
ret = auxch_rd(encoder, DP_LANE0_1_STATUS, status, 3);
if (ret)
break;
NV_DEBUG(dev, "\t\tstatus: 0x%02x 0x%02x\n",
NV_DEBUG_KMS(dev, "\t\tstatus: 0x%02x 0x%02x\n",
status[0], status[1]);
eq_done = true;
@ -395,9 +395,9 @@ nouveau_dp_link_train(struct drm_encoder *encoder)
/* retry at a lower setting, if possible */
if (!ret && !(eq_done && cr_done)) {
NV_DEBUG(dev, "\twe failed\n");
NV_DEBUG_KMS(dev, "\twe failed\n");
if (nv_encoder->dp.link_bw != DP_LINK_BW_1_62) {
NV_DEBUG(dev, "retry link training at low rate\n");
NV_DEBUG_KMS(dev, "retry link training at low rate\n");
nv_encoder->dp.link_bw = DP_LINK_BW_1_62;
goto train;
}
@ -418,7 +418,7 @@ nouveau_dp_detect(struct drm_encoder *encoder)
if (ret)
return false;
NV_DEBUG(dev, "encoder: link_bw %d, link_nr %d\n"
NV_DEBUG_KMS(dev, "encoder: link_bw %d, link_nr %d\n"
"display: link_bw %d, link_nr %d version 0x%02x\n",
nv_encoder->dcb->dpconf.link_bw,
nv_encoder->dcb->dpconf.link_nr,
@ -446,7 +446,7 @@ nouveau_dp_auxch(struct nouveau_i2c_chan *auxch, int cmd, int addr,
uint32_t tmp, ctrl, stat = 0, data32[4] = {};
int ret = 0, i, index = auxch->rd;
NV_DEBUG(dev, "ch %d cmd %d addr 0x%x len %d\n", index, cmd, addr, data_nr);
NV_DEBUG_KMS(dev, "ch %d cmd %d addr 0x%x len %d\n", index, cmd, addr, data_nr);
tmp = nv_rd32(dev, NV50_AUXCH_CTRL(auxch->rd));
nv_wr32(dev, NV50_AUXCH_CTRL(auxch->rd), tmp | 0x00100000);
@ -472,7 +472,7 @@ nouveau_dp_auxch(struct nouveau_i2c_chan *auxch, int cmd, int addr,
if (!(cmd & 1)) {
memcpy(data32, data, data_nr);
for (i = 0; i < 4; i++) {
NV_DEBUG(dev, "wr %d: 0x%08x\n", i, data32[i]);
NV_DEBUG_KMS(dev, "wr %d: 0x%08x\n", i, data32[i]);
nv_wr32(dev, NV50_AUXCH_DATA_OUT(index, i), data32[i]);
}
}
@ -504,7 +504,7 @@ nouveau_dp_auxch(struct nouveau_i2c_chan *auxch, int cmd, int addr,
if (cmd & 1) {
for (i = 0; i < 4; i++) {
data32[i] = nv_rd32(dev, NV50_AUXCH_DATA_IN(index, i));
NV_DEBUG(dev, "rd %d: 0x%08x\n", i, data32[i]);
NV_DEBUG_KMS(dev, "rd %d: 0x%08x\n", i, data32[i]);
}
memcpy(data, data32, data_nr);
}


@ -35,6 +35,10 @@
#include "drm_pciids.h"
MODULE_PARM_DESC(ctxfw, "Use external firmware blob for grctx init (NV40)");
int nouveau_ctxfw = 0;
module_param_named(ctxfw, nouveau_ctxfw, int, 0400);
MODULE_PARM_DESC(noagp, "Disable AGP");
int nouveau_noagp;
module_param_named(noagp, nouveau_noagp, int, 0400);
@ -273,7 +277,7 @@ nouveau_pci_resume(struct pci_dev *pdev)
for (i = 0; i < dev_priv->engine.fifo.channels; i++) {
chan = dev_priv->fifos[i];
if (!chan)
if (!chan || !chan->pushbuf_bo)
continue;
for (j = 0; j < NOUVEAU_DMA_SKIPS; j++)
@ -341,7 +345,7 @@ static struct drm_driver driver = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release,
.ioctl = drm_ioctl,
.unlocked_ioctl = drm_ioctl,
.mmap = nouveau_ttm_mmap,
.poll = drm_poll,
.fasync = drm_fasync,


@ -54,6 +54,7 @@ struct nouveau_fpriv {
#include "nouveau_drm.h"
#include "nouveau_reg.h"
#include "nouveau_bios.h"
struct nouveau_grctx;
#define MAX_NUM_DCB_ENTRIES 16
@ -317,6 +318,7 @@ struct nouveau_pgraph_engine {
bool accel_blocked;
void *ctxprog;
void *ctxvals;
int grctx_size;
int (*init)(struct drm_device *);
void (*takedown)(struct drm_device *);
@ -647,6 +649,7 @@ extern int nouveau_fbpercrtc;
extern char *nouveau_tv_norm;
extern int nouveau_reg_debug;
extern char *nouveau_vbios;
extern int nouveau_ctxfw;
/* nouveau_state.c */
extern void nouveau_preclose(struct drm_device *dev, struct drm_file *);
@ -959,9 +962,7 @@ extern int nv40_graph_create_context(struct nouveau_channel *);
extern void nv40_graph_destroy_context(struct nouveau_channel *);
extern int nv40_graph_load_context(struct nouveau_channel *);
extern int nv40_graph_unload_context(struct drm_device *);
extern int nv40_grctx_init(struct drm_device *);
extern void nv40_grctx_fini(struct drm_device *);
extern void nv40_grctx_vals_load(struct drm_device *, struct nouveau_gpuobj *);
extern void nv40_grctx_init(struct nouveau_grctx *);
/* nv50_graph.c */
extern struct nouveau_pgraph_object_class nv50_graph_grclass[];
@ -975,6 +976,12 @@ extern int nv50_graph_load_context(struct nouveau_channel *);
extern int nv50_graph_unload_context(struct drm_device *);
extern void nv50_graph_context_switch(struct drm_device *);
/* nouveau_grctx.c */
extern int nouveau_grctx_prog_load(struct drm_device *);
extern void nouveau_grctx_vals_load(struct drm_device *,
struct nouveau_gpuobj *);
extern void nouveau_grctx_fini(struct drm_device *);
/* nv04_instmem.c */
extern int nv04_instmem_init(struct drm_device *);
extern void nv04_instmem_takedown(struct drm_device *);
@ -1207,14 +1214,24 @@ static inline void nv_wo32(struct drm_device *dev, struct nouveau_gpuobj *obj,
pci_name(d->pdev), ##arg)
#ifndef NV_DEBUG_NOTRACE
#define NV_DEBUG(d, fmt, arg...) do { \
if (drm_debug) { \
if (drm_debug & DRM_UT_DRIVER) { \
NV_PRINTK(KERN_DEBUG, d, "%s:%d - " fmt, __func__, \
__LINE__, ##arg); \
} \
} while (0)
#define NV_DEBUG_KMS(d, fmt, arg...) do { \
if (drm_debug & DRM_UT_KMS) { \
NV_PRINTK(KERN_DEBUG, d, "%s:%d - " fmt, __func__, \
__LINE__, ##arg); \
} \
} while (0)
#else
#define NV_DEBUG(d, fmt, arg...) do { \
if (drm_debug) \
if (drm_debug & DRM_UT_DRIVER) \
NV_PRINTK(KERN_DEBUG, d, fmt, ##arg); \
} while (0)
#define NV_DEBUG_KMS(d, fmt, arg...) do { \
if (drm_debug & DRM_UT_KMS) \
NV_PRINTK(KERN_DEBUG, d, fmt, ##arg); \
} while (0)
#endif
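
The new macro pair gates driver-core and modesetting messages on separate bits of drm_debug, so KMS tracing can be turned on without the rest of the driver's chatter. Assuming the drmP.h bit values of this era (DRM_UT_DRIVER = 0x02 and DRM_UT_KMS = 0x04, an assumption rather than something shown in this patch), usage looks like:

/* drm.debug=0x02 enables NV_DEBUG(); drm.debug=0x04 enables
 * NV_DEBUG_KMS(); the bits can be OR'd together. The variables
 * below are illustrative. */
NV_DEBUG(dev, "ctxprog upload, len %d\n", len);   /* driver-core path */
NV_DEBUG_KMS(dev, "native mode %dx%d\n", w, h);   /* modesetting path */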


@ -58,7 +58,7 @@ nouveau_fbcon_sync(struct fb_info *info)
struct nouveau_channel *chan = dev_priv->channel;
int ret, i;
if (!chan->accel_done ||
if (!chan || !chan->accel_done ||
info->state != FBINFO_STATE_RUNNING ||
info->flags & FBINFO_HWACCEL_DISABLED)
return 0;
@ -318,14 +318,16 @@ nouveau_fbcon_create(struct drm_device *dev, uint32_t fb_width,
par->nouveau_fb = nouveau_fb;
par->dev = dev;
switch (dev_priv->card_type) {
case NV_50:
nv50_fbcon_accel_init(info);
break;
default:
nv04_fbcon_accel_init(info);
break;
};
if (dev_priv->channel) {
switch (dev_priv->card_type) {
case NV_50:
nv50_fbcon_accel_init(info);
break;
default:
nv04_fbcon_accel_init(info);
break;
};
}
nouveau_fbcon_zfill(dev);
@ -347,7 +349,7 @@ nouveau_fbcon_create(struct drm_device *dev, uint32_t fb_width,
int
nouveau_fbcon_probe(struct drm_device *dev)
{
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
return drm_fb_helper_single_fb_probe(dev, 32, nouveau_fbcon_create);
}


@ -0,0 +1,161 @@
/*
* Copyright 2009 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <linux/firmware.h>
#include "drmP.h"
#include "nouveau_drv.h"
struct nouveau_ctxprog {
uint32_t signature;
uint8_t version;
uint16_t length;
uint32_t data[];
} __attribute__ ((packed));
struct nouveau_ctxvals {
uint32_t signature;
uint8_t version;
uint32_t length;
struct {
uint32_t offset;
uint32_t value;
} data[];
} __attribute__ ((packed));
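
Both headers are packed blobs with little-endian fields, and the magic constants checked below are ASCII tags: 0x5043564e stores as "NVCP" and 0x5643564e as "NVCV", which is what the le32_to_cpu() conversions account for on big-endian hosts. A standalone sketch of the decode (plain userspace C; assumes a little-endian host as noted):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint32_t cp_sig = 0x5043564e, cv_sig = 0x5643564e;
	char tag[5] = { 0 };

	memcpy(tag, &cp_sig, 4);   /* little-endian host: prints "NVCP" */
	puts(tag);
	memcpy(tag, &cv_sig, 4);   /* prints "NVCV" */
	puts(tag);
	return 0;
}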
int
nouveau_grctx_prog_load(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
const int chipset = dev_priv->chipset;
const struct firmware *fw;
const struct nouveau_ctxprog *cp;
const struct nouveau_ctxvals *cv;
char name[32];
int ret, i;
if (pgraph->accel_blocked)
return -ENODEV;
if (!pgraph->ctxprog) {
sprintf(name, "nouveau/nv%02x.ctxprog", chipset);
ret = request_firmware(&fw, name, &dev->pdev->dev);
if (ret) {
NV_ERROR(dev, "No ctxprog for NV%02x\n", chipset);
return ret;
}
pgraph->ctxprog = kmalloc(fw->size, GFP_KERNEL);
if (!pgraph->ctxprog) {
NV_ERROR(dev, "OOM copying ctxprog\n");
release_firmware(fw);
return -ENOMEM;
}
memcpy(pgraph->ctxprog, fw->data, fw->size);
cp = pgraph->ctxprog;
if (le32_to_cpu(cp->signature) != 0x5043564e ||
cp->version != 0 ||
le16_to_cpu(cp->length) != ((fw->size - 7) / 4)) {
NV_ERROR(dev, "ctxprog invalid\n");
release_firmware(fw);
nouveau_grctx_fini(dev);
return -EINVAL;
}
release_firmware(fw);
}
if (!pgraph->ctxvals) {
sprintf(name, "nouveau/nv%02x.ctxvals", chipset);
ret = request_firmware(&fw, name, &dev->pdev->dev);
if (ret) {
NV_ERROR(dev, "No ctxvals for NV%02x\n", chipset);
nouveau_grctx_fini(dev);
return ret;
}
pgraph->ctxvals = kmalloc(fw->size, GFP_KERNEL);
if (!pgraph->ctxvals) {
NV_ERROR(dev, "OOM copying ctxvals\n");
release_firmware(fw);
nouveau_grctx_fini(dev);
return -ENOMEM;
}
memcpy(pgraph->ctxvals, fw->data, fw->size);
cv = (void *)pgraph->ctxvals;
if (le32_to_cpu(cv->signature) != 0x5643564e ||
cv->version != 0 ||
le32_to_cpu(cv->length) != ((fw->size - 9) / 8)) {
NV_ERROR(dev, "ctxvals invalid\n");
release_firmware(fw);
nouveau_grctx_fini(dev);
return -EINVAL;
}
release_firmware(fw);
}
cp = pgraph->ctxprog;
nv_wr32(dev, NV40_PGRAPH_CTXCTL_UCODE_INDEX, 0);
for (i = 0; i < le16_to_cpu(cp->length); i++)
nv_wr32(dev, NV40_PGRAPH_CTXCTL_UCODE_DATA,
le32_to_cpu(cp->data[i]));
return 0;
}
void
nouveau_grctx_fini(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
if (pgraph->ctxprog) {
kfree(pgraph->ctxprog);
pgraph->ctxprog = NULL;
}
if (pgraph->ctxvals) {
kfree(pgraph->ctxvals);
pgraph->ctxvals = NULL;
}
}
void
nouveau_grctx_vals_load(struct drm_device *dev, struct nouveau_gpuobj *ctx)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
struct nouveau_ctxvals *cv = pgraph->ctxvals;
int i;
if (!cv)
return;
for (i = 0; i < le32_to_cpu(cv->length); i++)
nv_wo32(dev, ctx, le32_to_cpu(cv->data[i].offset),
le32_to_cpu(cv->data[i].value));
}


@ -0,0 +1,133 @@
#ifndef __NOUVEAU_GRCTX_H__
#define __NOUVEAU_GRCTX_H__
struct nouveau_grctx {
struct drm_device *dev;
enum {
NOUVEAU_GRCTX_PROG,
NOUVEAU_GRCTX_VALS
} mode;
void *data;
uint32_t ctxprog_max;
uint32_t ctxprog_len;
uint32_t ctxprog_reg;
int ctxprog_label[32];
uint32_t ctxvals_pos;
uint32_t ctxvals_base;
};
#ifdef CP_CTX
static inline void
cp_out(struct nouveau_grctx *ctx, uint32_t inst)
{
uint32_t *ctxprog = ctx->data;
if (ctx->mode != NOUVEAU_GRCTX_PROG)
return;
BUG_ON(ctx->ctxprog_len == ctx->ctxprog_max);
ctxprog[ctx->ctxprog_len++] = inst;
}
static inline void
cp_lsr(struct nouveau_grctx *ctx, uint32_t val)
{
cp_out(ctx, CP_LOAD_SR | val);
}
static inline void
cp_ctx(struct nouveau_grctx *ctx, uint32_t reg, uint32_t length)
{
ctx->ctxprog_reg = (reg - 0x00400000) >> 2;
ctx->ctxvals_base = ctx->ctxvals_pos;
ctx->ctxvals_pos = ctx->ctxvals_base + length;
if (length > (CP_CTX_COUNT >> CP_CTX_COUNT_SHIFT)) {
cp_lsr(ctx, length);
length = 0;
}
cp_out(ctx, CP_CTX | (length << CP_CTX_COUNT_SHIFT) | ctx->ctxprog_reg);
}
static inline void
cp_name(struct nouveau_grctx *ctx, int name)
{
uint32_t *ctxprog = ctx->data;
int i;
if (ctx->mode != NOUVEAU_GRCTX_PROG)
return;
ctx->ctxprog_label[name] = ctx->ctxprog_len;
for (i = 0; i < ctx->ctxprog_len; i++) {
if ((ctxprog[i] & 0xfff00000) != 0xff400000)
continue;
if ((ctxprog[i] & CP_BRA_IP) != ((name) << CP_BRA_IP_SHIFT))
continue;
ctxprog[i] = (ctxprog[i] & 0x00ff00ff) |
(ctx->ctxprog_len << CP_BRA_IP_SHIFT);
}
}
static inline void
_cp_bra(struct nouveau_grctx *ctx, u32 mod, int flag, int state, int name)
{
int ip = 0;
if (mod != 2) {
ip = ctx->ctxprog_label[name] << CP_BRA_IP_SHIFT;
if (ip == 0)
ip = 0xff000000 | (name << CP_BRA_IP_SHIFT);
}
cp_out(ctx, CP_BRA | (mod << 18) | ip | flag |
(state ? 0 : CP_BRA_IF_CLEAR));
}
#define cp_bra(c,f,s,n) _cp_bra((c), 0, CP_FLAG_##f, CP_FLAG_##f##_##s, n)
#ifdef CP_BRA_MOD
#define cp_cal(c,f,s,n) _cp_bra((c), 1, CP_FLAG_##f, CP_FLAG_##f##_##s, n)
#define cp_ret(c,f,s) _cp_bra((c), 2, CP_FLAG_##f, CP_FLAG_##f##_##s, 0)
#endif
static inline void
_cp_wait(struct nouveau_grctx *ctx, int flag, int state)
{
cp_out(ctx, CP_WAIT | flag | (state ? CP_WAIT_SET : 0));
}
#define cp_wait(c,f,s) _cp_wait((c), CP_FLAG_##f, CP_FLAG_##f##_##s)
static inline void
_cp_set(struct nouveau_grctx *ctx, int flag, int state)
{
cp_out(ctx, CP_SET | flag | (state ? CP_SET_1 : 0));
}
#define cp_set(c,f,s) _cp_set((c), CP_FLAG_##f, CP_FLAG_##f##_##s)
static inline void
cp_pos(struct nouveau_grctx *ctx, int offset)
{
ctx->ctxvals_pos = offset;
ctx->ctxvals_base = ctx->ctxvals_pos;
cp_lsr(ctx, ctx->ctxvals_pos);
cp_out(ctx, CP_SET_CONTEXT_POINTER);
}
static inline void
gr_def(struct nouveau_grctx *ctx, uint32_t reg, uint32_t val)
{
if (ctx->mode != NOUVEAU_GRCTX_VALS)
return;
reg = (reg - 0x00400000) / 4;
reg = (reg - ctx->ctxprog_reg) + ctx->ctxvals_base;
nv_wo32(ctx->dev, ctx->data, reg, val);
}
#endif
#endif
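
These helpers form a small two-pass assembler for PGRAPH context-program microcode: cp_out() emits an opcode only in PROG mode, cp_name() defines a label and back-patches earlier forward branches to it, and cp_ctx() both emits a transfer opcode and advances the ctxvals bookkeeping in either mode, which is how gr_def() (active only in the VALS pass) computes each register's slot. A hypothetical fragment (the label, flow, and values are invented; the CP_* flag and opcode definitions live in the .c file that defines CP_CTX before including this header):

enum { my_label = 1 };   /* labels are just small integers */

static void emit_example(struct nouveau_grctx *ctx)
{
	cp_name(ctx, my_label);                     /* define label, patch forward branches */
	cp_wait(ctx, STATUS, IDLE);                 /* stall until PGRAPH is idle */
	cp_ctx(ctx, 0x400500, 19);                  /* transfer 19 words from 0x400500 */
	gr_def(ctx, 0x400514, 0x00040000);          /* default for a register in that range */
	cp_bra(ctx, AUTO_SAVE, PENDING, my_label);  /* backward branch while a save is pending */
	cp_out(ctx, CP_END);
}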


@ -61,12 +61,10 @@ long nouveau_compat_ioctl(struct file *filp, unsigned int cmd,
if (nr < DRM_COMMAND_BASE + DRM_ARRAY_SIZE(nouveau_compat_ioctls))
fn = nouveau_compat_ioctls[nr - DRM_COMMAND_BASE];
#endif
lock_kernel(); /* XXX for now */
if (fn != NULL)
ret = (*fn)(filp, cmd, arg);
else
ret = drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg);
unlock_kernel();
ret = drm_ioctl(filp, cmd, arg);
return ret;
}


@ -299,94 +299,13 @@ nouveau_vga_set_decode(void *priv, bool state)
return VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
}
int
nouveau_card_init(struct drm_device *dev)
static int
nouveau_card_init_channel(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_engine *engine;
struct nouveau_gpuobj *gpuobj;
int ret;
NV_DEBUG(dev, "prev state = %d\n", dev_priv->init_state);
if (dev_priv->init_state == NOUVEAU_CARD_INIT_DONE)
return 0;
vga_client_register(dev->pdev, dev, NULL, nouveau_vga_set_decode);
/* Initialise internal driver API hooks */
ret = nouveau_init_engine_ptrs(dev);
if (ret)
return ret;
engine = &dev_priv->engine;
dev_priv->init_state = NOUVEAU_CARD_INIT_FAILED;
/* Parse BIOS tables / Run init tables if card not POSTed */
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = nouveau_bios_init(dev);
if (ret)
return ret;
}
ret = nouveau_gpuobj_early_init(dev);
if (ret)
return ret;
/* Initialise instance memory, must happen before mem_init so we
* know exactly how much VRAM we're able to use for "normal"
* purposes.
*/
ret = engine->instmem.init(dev);
if (ret)
return ret;
/* Setup the memory manager */
ret = nouveau_mem_init(dev);
if (ret)
return ret;
ret = nouveau_gpuobj_init(dev);
if (ret)
return ret;
/* PMC */
ret = engine->mc.init(dev);
if (ret)
return ret;
/* PTIMER */
ret = engine->timer.init(dev);
if (ret)
return ret;
/* PFB */
ret = engine->fb.init(dev);
if (ret)
return ret;
/* PGRAPH */
ret = engine->graph.init(dev);
if (ret)
return ret;
/* PFIFO */
ret = engine->fifo.init(dev);
if (ret)
return ret;
/* this calls irq_preinstall, registers the irq handler and
* calls irq_postinstall
*/
ret = drm_irq_install(dev);
if (ret)
return ret;
ret = drm_vblank_init(dev, 0);
if (ret)
return ret;
/* what about PVIDEO/PCRTC/PRAMDAC etc? */
ret = nouveau_channel_alloc(dev, &dev_priv->channel,
(struct drm_file *)-2,
NvDmaFB, NvDmaTT);
@ -399,39 +318,133 @@ nouveau_card_init(struct drm_device *dev)
NV_DMA_ACCESS_RW, NV_DMA_TARGET_VIDMEM,
&gpuobj);
if (ret)
return ret;
goto out_err;
ret = nouveau_gpuobj_ref_add(dev, dev_priv->channel, NvDmaVRAM,
gpuobj, NULL);
if (ret) {
nouveau_gpuobj_del(dev, &gpuobj);
return ret;
}
if (ret)
goto out_err;
gpuobj = NULL;
ret = nouveau_gpuobj_gart_dma_new(dev_priv->channel, 0,
dev_priv->gart_info.aper_size,
NV_DMA_ACCESS_RW, &gpuobj, NULL);
if (ret)
return ret;
goto out_err;
ret = nouveau_gpuobj_ref_add(dev, dev_priv->channel, NvDmaGART,
gpuobj, NULL);
if (ret) {
nouveau_gpuobj_del(dev, &gpuobj);
return ret;
if (ret)
goto out_err;
return 0;
out_err:
nouveau_gpuobj_del(dev, &gpuobj);
nouveau_channel_free(dev_priv->channel);
dev_priv->channel = NULL;
return ret;
}
int
nouveau_card_init(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_engine *engine;
int ret;
NV_DEBUG(dev, "prev state = %d\n", dev_priv->init_state);
if (dev_priv->init_state == NOUVEAU_CARD_INIT_DONE)
return 0;
vga_client_register(dev->pdev, dev, NULL, nouveau_vga_set_decode);
/* Initialise internal driver API hooks */
ret = nouveau_init_engine_ptrs(dev);
if (ret)
goto out;
engine = &dev_priv->engine;
dev_priv->init_state = NOUVEAU_CARD_INIT_FAILED;
/* Parse BIOS tables / Run init tables if card not POSTed */
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = nouveau_bios_init(dev);
if (ret)
goto out;
}
ret = nouveau_gpuobj_early_init(dev);
if (ret)
goto out_bios;
/* Initialise instance memory, must happen before mem_init so we
* know exactly how much VRAM we're able to use for "normal"
* purposes.
*/
ret = engine->instmem.init(dev);
if (ret)
goto out_gpuobj_early;
/* Setup the memory manager */
ret = nouveau_mem_init(dev);
if (ret)
goto out_instmem;
ret = nouveau_gpuobj_init(dev);
if (ret)
goto out_mem;
/* PMC */
ret = engine->mc.init(dev);
if (ret)
goto out_gpuobj;
/* PTIMER */
ret = engine->timer.init(dev);
if (ret)
goto out_mc;
/* PFB */
ret = engine->fb.init(dev);
if (ret)
goto out_timer;
/* PGRAPH */
ret = engine->graph.init(dev);
if (ret)
goto out_fb;
/* PFIFO */
ret = engine->fifo.init(dev);
if (ret)
goto out_graph;
/* this calls irq_preinstall, registers the irq handler and
* calls irq_postinstall
*/
ret = drm_irq_install(dev);
if (ret)
goto out_fifo;
ret = drm_vblank_init(dev, 0);
if (ret)
goto out_irq;
/* what about PVIDEO/PCRTC/PRAMDAC etc? */
if (!engine->graph.accel_blocked) {
ret = nouveau_card_init_channel(dev);
if (ret)
goto out_irq;
}
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
if (dev_priv->card_type >= NV_50) {
if (dev_priv->card_type >= NV_50)
ret = nv50_display_create(dev);
if (ret)
return ret;
} else {
else
ret = nv04_display_create(dev);
if (ret)
return ret;
}
if (ret)
goto out_irq;
}
ret = nouveau_backlight_init(dev);
@ -444,6 +457,32 @@ nouveau_card_init(struct drm_device *dev)
drm_helper_initial_config(dev);
return 0;
out_irq:
drm_irq_uninstall(dev);
out_fifo:
engine->fifo.takedown(dev);
out_graph:
engine->graph.takedown(dev);
out_fb:
engine->fb.takedown(dev);
out_timer:
engine->timer.takedown(dev);
out_mc:
engine->mc.takedown(dev);
out_gpuobj:
nouveau_gpuobj_takedown(dev);
out_mem:
nouveau_mem_close(dev);
out_instmem:
engine->instmem.takedown(dev);
out_gpuobj_early:
nouveau_gpuobj_late_takedown(dev);
out_bios:
nouveau_bios_takedown(dev);
out:
vga_client_register(dev->pdev, NULL, NULL, NULL);
return ret;
}
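
The rework above replaces bare early returns, which leaked every engine initialised so far, with the kernel's usual goto-unwind ladder: each successful step gains a label, and a failure at step N jumps to the label that tears down steps N-1 back to 1 in reverse order. The idiom reduced to two stages (the stage_* names are illustrative stand-ins, not driver functions):

static int stage_a_init(void)      { /* ... */ return 0; }
static void stage_a_takedown(void) { /* ... */ }
static int stage_b_init(void)      { /* ... */ return 0; }

static int card_init(void)
{
	int ret;

	ret = stage_a_init();
	if (ret)
		goto out;           /* nothing to undo yet */

	ret = stage_b_init();
	if (ret)
		goto out_stage_a;   /* undo A; B never succeeded */

	return 0;

out_stage_a:
	stage_a_takedown();         /* unwind in reverse init order */
out:
	return ret;
}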
static void nouveau_card_takedown(struct drm_device *dev)


@ -143,10 +143,10 @@ static void nv_crtc_calc_state_ext(struct drm_crtc *crtc, struct drm_display_mod
state->pllsel |= nv_crtc->index ? PLLSEL_VPLL2_MASK : PLLSEL_VPLL1_MASK;
if (pv->NM2)
NV_TRACE(dev, "vpll: n1 %d n2 %d m1 %d m2 %d log2p %d\n",
NV_DEBUG_KMS(dev, "vpll: n1 %d n2 %d m1 %d m2 %d log2p %d\n",
pv->N1, pv->N2, pv->M1, pv->M2, pv->log2P);
else
NV_TRACE(dev, "vpll: n %d m %d log2p %d\n",
NV_DEBUG_KMS(dev, "vpll: n %d m %d log2p %d\n",
pv->N1, pv->M1, pv->log2P);
nv_crtc->cursor.set_offset(nv_crtc, nv_crtc->cursor.offset);
@ -160,7 +160,7 @@ nv_crtc_dpms(struct drm_crtc *crtc, int mode)
unsigned char seq1 = 0, crtc17 = 0;
unsigned char crtc1A;
NV_TRACE(dev, "Setting dpms mode %d on CRTC %d\n", mode,
NV_DEBUG_KMS(dev, "Setting dpms mode %d on CRTC %d\n", mode,
nv_crtc->index);
if (nv_crtc->last_dpms == mode) /* Don't do unnecessary mode changes. */
@ -603,7 +603,7 @@ nv_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode,
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
struct drm_nouveau_private *dev_priv = dev->dev_private;
NV_DEBUG(dev, "CTRC mode on CRTC %d:\n", nv_crtc->index);
NV_DEBUG_KMS(dev, "CTRC mode on CRTC %d:\n", nv_crtc->index);
drm_mode_debug_printmodeline(adjusted_mode);
/* unlock must come after turning off FP_TG_CONTROL in output_prepare */
@ -703,7 +703,7 @@ static void nv_crtc_destroy(struct drm_crtc *crtc)
{
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
NV_DEBUG(crtc->dev, "\n");
NV_DEBUG_KMS(crtc->dev, "\n");
if (!nv_crtc)
return;


@ -205,7 +205,7 @@ static enum drm_connector_status nv04_dac_detect(struct drm_encoder *encoder,
NVWriteVgaSeq(dev, 0, NV_VIO_SR_CLOCK_INDEX, saved_seq1);
if (blue == 0x18) {
NV_TRACE(dev, "Load detected on head A\n");
NV_INFO(dev, "Load detected on head A\n");
return connector_status_connected;
}
@ -350,14 +350,10 @@ static void nv04_dac_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
struct drm_device *dev = encoder->dev;
struct drm_nouveau_private *dev_priv = dev->dev_private;
int head = nouveau_crtc(encoder->crtc)->index;
NV_TRACE(dev, "%s called for encoder %d\n", __func__,
nv_encoder->dcb->index);
if (nv_gf4_disp_arch(dev)) {
struct drm_encoder *rebind;
uint32_t dac_offset = nv04_dac_output_offset(encoder);
@ -466,7 +462,7 @@ static void nv04_dac_destroy(struct drm_encoder *encoder)
{
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
NV_DEBUG(encoder->dev, "\n");
NV_DEBUG_KMS(encoder->dev, "\n");
drm_encoder_cleanup(encoder);
kfree(nv_encoder);


@ -261,7 +261,7 @@ static void nv04_dfp_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *output_mode = &nv_encoder->mode;
uint32_t mode_ratio, panel_ratio;
NV_DEBUG(dev, "Output mode on CRTC %d:\n", nv_crtc->index);
NV_DEBUG_KMS(dev, "Output mode on CRTC %d:\n", nv_crtc->index);
drm_mode_debug_printmodeline(output_mode);
/* Initialize the FP registers in this CRTC. */
@ -413,7 +413,9 @@ static void nv04_dfp_commit(struct drm_encoder *encoder)
struct dcb_entry *dcbe = nv_encoder->dcb;
int head = nouveau_crtc(encoder->crtc)->index;
NV_TRACE(dev, "%s called for encoder %d\n", __func__, nv_encoder->dcb->index);
NV_INFO(dev, "Output %s is running on CRTC %d using output %c\n",
drm_get_connector_name(&nouveau_encoder_connector_get(nv_encoder)->base),
nv_crtc->index, '@' + ffs(nv_encoder->dcb->or));
if (dcbe->type == OUTPUT_TMDS)
run_tmds_table(dev, dcbe, head, nv_encoder->mode.clock);
@ -550,7 +552,7 @@ static void nv04_dfp_destroy(struct drm_encoder *encoder)
{
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
NV_DEBUG(encoder->dev, "\n");
NV_DEBUG_KMS(encoder->dev, "\n");
drm_encoder_cleanup(encoder);
kfree(nv_encoder);


@ -99,10 +99,11 @@ nv04_display_create(struct drm_device *dev)
uint16_t connector[16] = { 0 };
int i, ret;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
if (nv_two_heads(dev))
nv04_display_store_initial_head_owner(dev);
nouveau_hw_save_vga_fonts(dev, 1);
drm_mode_config_init(dev);
drm_mode_create_scaling_mode_property(dev);
@ -203,8 +204,6 @@ nv04_display_create(struct drm_device *dev)
/* Save previous state */
NVLockVgaCrtcs(dev, false);
nouveau_hw_save_vga_fonts(dev, 1);
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
crtc->funcs->save(crtc);
@ -223,7 +222,7 @@ nv04_display_destroy(struct drm_device *dev)
struct drm_encoder *encoder;
struct drm_crtc *crtc;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
/* Turn every CRTC off. */
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
@ -246,9 +245,9 @@ nv04_display_destroy(struct drm_device *dev)
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
crtc->funcs->restore(crtc);
nouveau_hw_save_vga_fonts(dev, 0);
drm_mode_config_cleanup(dev);
nouveau_hw_save_vga_fonts(dev, 0);
}
void


@ -543,7 +543,7 @@ nv04_graph_mthd_set_operation(struct nouveau_channel *chan, int grclass,
nv_wi32(dev, instance, tmp);
nv_wr32(dev, NV04_PGRAPH_CTX_SWITCH1, tmp);
nv_wr32(dev, NV04_PGRAPH_CTX_CACHE1 + subc, tmp);
nv_wr32(dev, NV04_PGRAPH_CTX_CACHE1 + (subc<<2), tmp);
return 0;
}


@ -389,49 +389,50 @@ struct graph_state {
int nv10[ARRAY_SIZE(nv10_graph_ctx_regs)];
int nv17[ARRAY_SIZE(nv17_graph_ctx_regs)];
struct pipe_state pipe_state;
uint32_t lma_window[4];
};
#define PIPE_SAVE(dev, state, addr) \
do { \
int __i; \
nv_wr32(dev, NV10_PGRAPH_PIPE_ADDRESS, addr); \
for (__i = 0; __i < ARRAY_SIZE(state); __i++) \
state[__i] = nv_rd32(dev, NV10_PGRAPH_PIPE_DATA); \
} while (0)
#define PIPE_RESTORE(dev, state, addr) \
do { \
int __i; \
nv_wr32(dev, NV10_PGRAPH_PIPE_ADDRESS, addr); \
for (__i = 0; __i < ARRAY_SIZE(state); __i++) \
nv_wr32(dev, NV10_PGRAPH_PIPE_DATA, state[__i]); \
} while (0)
static void nv10_graph_save_pipe(struct nouveau_channel *chan)
{
struct drm_device *dev = chan->dev;
struct graph_state *pgraph_ctx = chan->pgraph_ctx;
struct pipe_state *fifo_pipe_state = &pgraph_ctx->pipe_state;
int i;
#define PIPE_SAVE(addr) \
do { \
nv_wr32(dev, NV10_PGRAPH_PIPE_ADDRESS, addr); \
for (i = 0; i < ARRAY_SIZE(fifo_pipe_state->pipe_##addr); i++) \
fifo_pipe_state->pipe_##addr[i] = nv_rd32(dev, NV10_PGRAPH_PIPE_DATA); \
} while (0)
struct pipe_state *pipe = &pgraph_ctx->pipe_state;
PIPE_SAVE(0x4400);
PIPE_SAVE(0x0200);
PIPE_SAVE(0x6400);
PIPE_SAVE(0x6800);
PIPE_SAVE(0x6c00);
PIPE_SAVE(0x7000);
PIPE_SAVE(0x7400);
PIPE_SAVE(0x7800);
PIPE_SAVE(0x0040);
PIPE_SAVE(0x0000);
#undef PIPE_SAVE
PIPE_SAVE(dev, pipe->pipe_0x4400, 0x4400);
PIPE_SAVE(dev, pipe->pipe_0x0200, 0x0200);
PIPE_SAVE(dev, pipe->pipe_0x6400, 0x6400);
PIPE_SAVE(dev, pipe->pipe_0x6800, 0x6800);
PIPE_SAVE(dev, pipe->pipe_0x6c00, 0x6c00);
PIPE_SAVE(dev, pipe->pipe_0x7000, 0x7000);
PIPE_SAVE(dev, pipe->pipe_0x7400, 0x7400);
PIPE_SAVE(dev, pipe->pipe_0x7800, 0x7800);
PIPE_SAVE(dev, pipe->pipe_0x0040, 0x0040);
PIPE_SAVE(dev, pipe->pipe_0x0000, 0x0000);
}
static void nv10_graph_load_pipe(struct nouveau_channel *chan)
{
struct drm_device *dev = chan->dev;
struct graph_state *pgraph_ctx = chan->pgraph_ctx;
struct pipe_state *fifo_pipe_state = &pgraph_ctx->pipe_state;
int i;
struct pipe_state *pipe = &pgraph_ctx->pipe_state;
uint32_t xfmode0, xfmode1;
#define PIPE_RESTORE(addr) \
do { \
nv_wr32(dev, NV10_PGRAPH_PIPE_ADDRESS, addr); \
for (i = 0; i < ARRAY_SIZE(fifo_pipe_state->pipe_##addr); i++) \
nv_wr32(dev, NV10_PGRAPH_PIPE_DATA, fifo_pipe_state->pipe_##addr[i]); \
} while (0)
int i;
nouveau_wait_for_idle(dev);
/* XXX check haiku comments */
@ -457,24 +458,22 @@ static void nv10_graph_load_pipe(struct nouveau_channel *chan)
nv_wr32(dev, NV10_PGRAPH_PIPE_DATA, 0x00000008);
PIPE_RESTORE(0x0200);
PIPE_RESTORE(dev, pipe->pipe_0x0200, 0x0200);
nouveau_wait_for_idle(dev);
/* restore XFMODE */
nv_wr32(dev, NV10_PGRAPH_XFMODE0, xfmode0);
nv_wr32(dev, NV10_PGRAPH_XFMODE1, xfmode1);
PIPE_RESTORE(0x6400);
PIPE_RESTORE(0x6800);
PIPE_RESTORE(0x6c00);
PIPE_RESTORE(0x7000);
PIPE_RESTORE(0x7400);
PIPE_RESTORE(0x7800);
PIPE_RESTORE(0x4400);
PIPE_RESTORE(0x0000);
PIPE_RESTORE(0x0040);
PIPE_RESTORE(dev, pipe->pipe_0x6400, 0x6400);
PIPE_RESTORE(dev, pipe->pipe_0x6800, 0x6800);
PIPE_RESTORE(dev, pipe->pipe_0x6c00, 0x6c00);
PIPE_RESTORE(dev, pipe->pipe_0x7000, 0x7000);
PIPE_RESTORE(dev, pipe->pipe_0x7400, 0x7400);
PIPE_RESTORE(dev, pipe->pipe_0x7800, 0x7800);
PIPE_RESTORE(dev, pipe->pipe_0x4400, 0x4400);
PIPE_RESTORE(dev, pipe->pipe_0x0000, 0x0000);
PIPE_RESTORE(dev, pipe->pipe_0x0040, 0x0040);
nouveau_wait_for_idle(dev);
#undef PIPE_RESTORE
}
static void nv10_graph_create_pipe(struct nouveau_channel *chan)
@ -832,6 +831,9 @@ int nv10_graph_init(struct drm_device *dev)
(1<<31));
if (dev_priv->chipset >= 0x17) {
nv_wr32(dev, NV10_PGRAPH_DEBUG_4, 0x1f000000);
nv_wr32(dev, 0x400a10, 0x3ff3fb6);
nv_wr32(dev, 0x400838, 0x2f8684);
nv_wr32(dev, 0x40083c, 0x115f3f);
nv_wr32(dev, 0x004006b0, 0x40000020);
} else
nv_wr32(dev, NV10_PGRAPH_DEBUG_4, 0x00000000);
@ -867,6 +869,115 @@ void nv10_graph_takedown(struct drm_device *dev)
{
}
static int
nv17_graph_mthd_lma_window(struct nouveau_channel *chan, int grclass,
int mthd, uint32_t data)
{
struct drm_device *dev = chan->dev;
struct graph_state *ctx = chan->pgraph_ctx;
struct pipe_state *pipe = &ctx->pipe_state;
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
uint32_t pipe_0x0040[1], pipe_0x64c0[8], pipe_0x6a80[3], pipe_0x6ab0[3];
uint32_t xfmode0, xfmode1;
int i;
ctx->lma_window[(mthd - 0x1638) / 4] = data;
if (mthd != 0x1644)
return 0;
nouveau_wait_for_idle(dev);
PIPE_SAVE(dev, pipe_0x0040, 0x0040);
PIPE_SAVE(dev, pipe->pipe_0x0200, 0x0200);
PIPE_RESTORE(dev, ctx->lma_window, 0x6790);
nouveau_wait_for_idle(dev);
xfmode0 = nv_rd32(dev, NV10_PGRAPH_XFMODE0);
xfmode1 = nv_rd32(dev, NV10_PGRAPH_XFMODE1);
PIPE_SAVE(dev, pipe->pipe_0x4400, 0x4400);
PIPE_SAVE(dev, pipe_0x64c0, 0x64c0);
PIPE_SAVE(dev, pipe_0x6ab0, 0x6ab0);
PIPE_SAVE(dev, pipe_0x6a80, 0x6a80);
nouveau_wait_for_idle(dev);
nv_wr32(dev, NV10_PGRAPH_XFMODE0, 0x10000000);
nv_wr32(dev, NV10_PGRAPH_XFMODE1, 0x00000000);
nv_wr32(dev, NV10_PGRAPH_PIPE_ADDRESS, 0x000064c0);
for (i = 0; i < 4; i++)
nv_wr32(dev, NV10_PGRAPH_PIPE_DATA, 0x3f800000);
for (i = 0; i < 4; i++)
nv_wr32(dev, NV10_PGRAPH_PIPE_DATA, 0x00000000);
nv_wr32(dev, NV10_PGRAPH_PIPE_ADDRESS, 0x00006ab0);
for (i = 0; i < 3; i++)
nv_wr32(dev, NV10_PGRAPH_PIPE_DATA, 0x3f800000);
nv_wr32(dev, NV10_PGRAPH_PIPE_ADDRESS, 0x00006a80);
for (i = 0; i < 3; i++)
nv_wr32(dev, NV10_PGRAPH_PIPE_DATA, 0x00000000);
nv_wr32(dev, NV10_PGRAPH_PIPE_ADDRESS, 0x00000040);
nv_wr32(dev, NV10_PGRAPH_PIPE_DATA, 0x00000008);
PIPE_RESTORE(dev, pipe->pipe_0x0200, 0x0200);
nouveau_wait_for_idle(dev);
PIPE_RESTORE(dev, pipe_0x0040, 0x0040);
nv_wr32(dev, NV10_PGRAPH_XFMODE0, xfmode0);
nv_wr32(dev, NV10_PGRAPH_XFMODE1, xfmode1);
PIPE_RESTORE(dev, pipe_0x64c0, 0x64c0);
PIPE_RESTORE(dev, pipe_0x6ab0, 0x6ab0);
PIPE_RESTORE(dev, pipe_0x6a80, 0x6a80);
PIPE_RESTORE(dev, pipe->pipe_0x4400, 0x4400);
nv_wr32(dev, NV10_PGRAPH_PIPE_ADDRESS, 0x000000c0);
nv_wr32(dev, NV10_PGRAPH_PIPE_DATA, 0x00000000);
nouveau_wait_for_idle(dev);
pgraph->fifo_access(dev, true);
return 0;
}
static int
nv17_graph_mthd_lma_enable(struct nouveau_channel *chan, int grclass,
int mthd, uint32_t data)
{
struct drm_device *dev = chan->dev;
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
nouveau_wait_for_idle(dev);
nv_wr32(dev, NV10_PGRAPH_DEBUG_4,
nv_rd32(dev, NV10_PGRAPH_DEBUG_4) | 0x1 << 8);
nv_wr32(dev, 0x004006b0,
nv_rd32(dev, 0x004006b0) | 0x8 << 24);
pgraph->fifo_access(dev, true);
return 0;
}
static struct nouveau_pgraph_object_method nv17_graph_celsius_mthds[] = {
{ 0x1638, nv17_graph_mthd_lma_window },
{ 0x163c, nv17_graph_mthd_lma_window },
{ 0x1640, nv17_graph_mthd_lma_window },
{ 0x1644, nv17_graph_mthd_lma_window },
{ 0x1658, nv17_graph_mthd_lma_enable },
{}
};
struct nouveau_pgraph_object_class nv10_graph_grclass[] = {
{ 0x0030, false, NULL }, /* null */
{ 0x0039, false, NULL }, /* m2mf */
@ -887,6 +998,6 @@ struct nouveau_pgraph_object_class nv10_graph_grclass[] = {
{ 0x0095, false, NULL }, /* multitex_tri */
{ 0x0056, false, NULL }, /* celsius (nv10) */
{ 0x0096, false, NULL }, /* celsius (nv11) */
{ 0x0099, false, NULL }, /* celsius (nv17) */
{ 0x0099, false, nv17_graph_celsius_mthds }, /* celsius (nv17) */
{}
};
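
The table change above hooks the celsius LMA methods: 0x1638 through 0x1644 each deliver one dword of the four-word LMA window, (mthd - 0x1638) / 4 maps them to indices 0..3, and only the final method pays for the pipe reprogramming. The accumulate-then-commit pattern in isolation (apply_window() and the handler are illustrative stand-ins, not the driver code):

#include <stdint.h>

static uint32_t lma_window[4];

static void apply_window(const uint32_t w[4]) { (void)w; /* pipe reprogram */ }

static int handle_lma_window(int mthd, uint32_t data)
{
	lma_window[(mthd - 0x1638) / 4] = data;   /* 0x1638..0x1644 -> 0..3 */
	if (mthd != 0x1644)
		return 0;                         /* not the final dword yet */
	apply_window(lma_window);
	return 0;
}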


@ -219,7 +219,7 @@ static void nv17_tv_dpms(struct drm_encoder *encoder, int mode)
return;
nouveau_encoder(encoder)->last_dpms = mode;
NV_TRACE(dev, "Setting dpms mode %d on TV encoder (output %d)\n",
NV_INFO(dev, "Setting dpms mode %d on TV encoder (output %d)\n",
mode, nouveau_encoder(encoder)->dcb->index);
regs->ptv_200 &= ~1;
@ -619,7 +619,7 @@ static void nv17_tv_destroy(struct drm_encoder *encoder)
{
struct nv17_tv_encoder *tv_enc = to_tv_enc(encoder);
NV_DEBUG(encoder->dev, "\n");
NV_DEBUG_KMS(encoder->dev, "\n");
drm_encoder_cleanup(encoder);
kfree(tv_enc);


@ -24,36 +24,10 @@
*
*/
#include <linux/firmware.h>
#include "drmP.h"
#include "drm.h"
#include "nouveau_drv.h"
MODULE_FIRMWARE("nouveau/nv40.ctxprog");
MODULE_FIRMWARE("nouveau/nv40.ctxvals");
MODULE_FIRMWARE("nouveau/nv41.ctxprog");
MODULE_FIRMWARE("nouveau/nv41.ctxvals");
MODULE_FIRMWARE("nouveau/nv42.ctxprog");
MODULE_FIRMWARE("nouveau/nv42.ctxvals");
MODULE_FIRMWARE("nouveau/nv43.ctxprog");
MODULE_FIRMWARE("nouveau/nv43.ctxvals");
MODULE_FIRMWARE("nouveau/nv44.ctxprog");
MODULE_FIRMWARE("nouveau/nv44.ctxvals");
MODULE_FIRMWARE("nouveau/nv46.ctxprog");
MODULE_FIRMWARE("nouveau/nv46.ctxvals");
MODULE_FIRMWARE("nouveau/nv47.ctxprog");
MODULE_FIRMWARE("nouveau/nv47.ctxvals");
MODULE_FIRMWARE("nouveau/nv49.ctxprog");
MODULE_FIRMWARE("nouveau/nv49.ctxvals");
MODULE_FIRMWARE("nouveau/nv4a.ctxprog");
MODULE_FIRMWARE("nouveau/nv4a.ctxvals");
MODULE_FIRMWARE("nouveau/nv4b.ctxprog");
MODULE_FIRMWARE("nouveau/nv4b.ctxvals");
MODULE_FIRMWARE("nouveau/nv4c.ctxprog");
MODULE_FIRMWARE("nouveau/nv4c.ctxvals");
MODULE_FIRMWARE("nouveau/nv4e.ctxprog");
MODULE_FIRMWARE("nouveau/nv4e.ctxvals");
#include "nouveau_grctx.h"
struct nouveau_channel *
nv40_graph_channel(struct drm_device *dev)
@ -83,27 +57,30 @@ nv40_graph_create_context(struct nouveau_channel *chan)
{
struct drm_device *dev = chan->dev;
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_gpuobj *ctx;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
int ret;
/* Allocate a 175KiB block of PRAMIN to store the context. This
* is massive overkill for a lot of chipsets, but it should be safe
* until we're able to implement this properly (will happen at more
* or less the same time we're able to write our own context programs.
*/
ret = nouveau_gpuobj_new_ref(dev, chan, NULL, 0, 175*1024, 16,
NVOBJ_FLAG_ZERO_ALLOC,
&chan->ramin_grctx);
ret = nouveau_gpuobj_new_ref(dev, chan, NULL, 0, pgraph->grctx_size,
16, NVOBJ_FLAG_ZERO_ALLOC,
&chan->ramin_grctx);
if (ret)
return ret;
ctx = chan->ramin_grctx->gpuobj;
/* Initialise default context values */
dev_priv->engine.instmem.prepare_access(dev, true);
nv40_grctx_vals_load(dev, ctx);
nv_wo32(dev, ctx, 0, ctx->im_pramin->start);
dev_priv->engine.instmem.finish_access(dev);
if (!pgraph->ctxprog) {
struct nouveau_grctx ctx = {};
ctx.dev = chan->dev;
ctx.mode = NOUVEAU_GRCTX_VALS;
ctx.data = chan->ramin_grctx->gpuobj;
nv40_grctx_init(&ctx);
} else {
nouveau_grctx_vals_load(dev, chan->ramin_grctx->gpuobj);
}
nv_wo32(dev, chan->ramin_grctx->gpuobj, 0,
chan->ramin_grctx->gpuobj->im_pramin->start);
dev_priv->engine.instmem.finish_access(dev);
return 0;
}
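
The generator is mode-driven: nv40_graph_init() (further down) runs nv40_grctx_init() once in NOUVEAU_GRCTX_PROG mode to emit the microcode and learn the context size, and each channel then runs it in NOUVEAU_GRCTX_VALS mode, as above, to fill its context object with defaults. The two passes, condensed from this patch's call sites (error handling omitted):

/* Pass 1, at PGRAPH init: build the context program itself. */
struct nouveau_grctx ctx = {};
uint32_t cp[256];

ctx.dev = dev;
ctx.mode = NOUVEAU_GRCTX_PROG;        /* cp_out() and friends emit opcodes */
ctx.data = cp;
ctx.ctxprog_max = 256;
nv40_grctx_init(&ctx);                /* fills cp[], sets ctxprog_len/ctxvals_pos */

/* Pass 2, per channel: write default values into the context object. */
ctx.mode = NOUVEAU_GRCTX_VALS;        /* gr_def() writes via nv_wo32() */
ctx.data = chan->ramin_grctx->gpuobj;
nv40_grctx_init(&ctx);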
@ -204,139 +181,6 @@ nv40_graph_unload_context(struct drm_device *dev)
return ret;
}
struct nouveau_ctxprog {
uint32_t signature;
uint8_t version;
uint16_t length;
uint32_t data[];
} __attribute__ ((packed));
struct nouveau_ctxvals {
uint32_t signature;
uint8_t version;
uint32_t length;
struct {
uint32_t offset;
uint32_t value;
} data[];
} __attribute__ ((packed));
int
nv40_grctx_init(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
const int chipset = dev_priv->chipset;
const struct firmware *fw;
const struct nouveau_ctxprog *cp;
const struct nouveau_ctxvals *cv;
char name[32];
int ret, i;
pgraph->accel_blocked = true;
if (!pgraph->ctxprog) {
sprintf(name, "nouveau/nv%02x.ctxprog", chipset);
ret = request_firmware(&fw, name, &dev->pdev->dev);
if (ret) {
NV_ERROR(dev, "No ctxprog for NV%02x\n", chipset);
return ret;
}
pgraph->ctxprog = kmalloc(fw->size, GFP_KERNEL);
if (!pgraph->ctxprog) {
NV_ERROR(dev, "OOM copying ctxprog\n");
release_firmware(fw);
return -ENOMEM;
}
memcpy(pgraph->ctxprog, fw->data, fw->size);
cp = pgraph->ctxprog;
if (le32_to_cpu(cp->signature) != 0x5043564e ||
cp->version != 0 ||
le16_to_cpu(cp->length) != ((fw->size - 7) / 4)) {
NV_ERROR(dev, "ctxprog invalid\n");
release_firmware(fw);
nv40_grctx_fini(dev);
return -EINVAL;
}
release_firmware(fw);
}
if (!pgraph->ctxvals) {
sprintf(name, "nouveau/nv%02x.ctxvals", chipset);
ret = request_firmware(&fw, name, &dev->pdev->dev);
if (ret) {
NV_ERROR(dev, "No ctxvals for NV%02x\n", chipset);
nv40_grctx_fini(dev);
return ret;
}
pgraph->ctxvals = kmalloc(fw->size, GFP_KERNEL);
if (!pgraph->ctxprog) {
NV_ERROR(dev, "OOM copying ctxprog\n");
release_firmware(fw);
nv40_grctx_fini(dev);
return -ENOMEM;
}
memcpy(pgraph->ctxvals, fw->data, fw->size);
cv = (void *)pgraph->ctxvals;
if (le32_to_cpu(cv->signature) != 0x5643564e ||
cv->version != 0 ||
le32_to_cpu(cv->length) != ((fw->size - 9) / 8)) {
NV_ERROR(dev, "ctxvals invalid\n");
release_firmware(fw);
nv40_grctx_fini(dev);
return -EINVAL;
}
release_firmware(fw);
}
cp = pgraph->ctxprog;
nv_wr32(dev, NV40_PGRAPH_CTXCTL_UCODE_INDEX, 0);
for (i = 0; i < le16_to_cpu(cp->length); i++)
nv_wr32(dev, NV40_PGRAPH_CTXCTL_UCODE_DATA,
le32_to_cpu(cp->data[i]));
pgraph->accel_blocked = false;
return 0;
}
void
nv40_grctx_fini(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
if (pgraph->ctxprog) {
kfree(pgraph->ctxprog);
pgraph->ctxprog = NULL;
}
if (pgraph->ctxvals) {
kfree(pgraph->ctxprog);
pgraph->ctxvals = NULL;
}
}
void
nv40_grctx_vals_load(struct drm_device *dev, struct nouveau_gpuobj *ctx)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
struct nouveau_ctxvals *cv = pgraph->ctxvals;
int i;
if (!cv)
return;
for (i = 0; i < le32_to_cpu(cv->length); i++)
nv_wo32(dev, ctx, le32_to_cpu(cv->data[i].offset),
le32_to_cpu(cv->data[i].value));
}
/*
* G70 0x47
* G71 0x49
@ -359,7 +203,26 @@ nv40_graph_init(struct drm_device *dev)
nv_wr32(dev, NV03_PMC_ENABLE, nv_rd32(dev, NV03_PMC_ENABLE) |
NV_PMC_ENABLE_PGRAPH);
nv40_grctx_init(dev);
if (nouveau_ctxfw) {
nouveau_grctx_prog_load(dev);
dev_priv->engine.graph.grctx_size = 175 * 1024;
}
if (!dev_priv->engine.graph.ctxprog) {
struct nouveau_grctx ctx = {};
uint32_t cp[256];
ctx.dev = dev;
ctx.mode = NOUVEAU_GRCTX_PROG;
ctx.data = cp;
ctx.ctxprog_max = 256;
nv40_grctx_init(&ctx);
dev_priv->engine.graph.grctx_size = ctx.ctxvals_pos * 4;
nv_wr32(dev, NV40_PGRAPH_CTXCTL_UCODE_INDEX, 0);
for (i = 0; i < ctx.ctxprog_len; i++)
nv_wr32(dev, NV40_PGRAPH_CTXCTL_UCODE_DATA, cp[i]);
}
/* No context present currently */
nv_wr32(dev, NV40_PGRAPH_CTXCTL_CUR, 0x00000000);
@ -539,6 +402,7 @@ nv40_graph_init(struct drm_device *dev)
void nv40_graph_takedown(struct drm_device *dev)
{
nouveau_grctx_fini(dev);
}
struct nouveau_pgraph_object_class nv40_graph_grclass[] = {


@ -0,0 +1,678 @@
/*
* Copyright 2009 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
/* NVIDIA context programs handle a number of other conditions which are
* not implemented in our versions. It's not clear why NVIDIA context
* programs have this code, nor whether it's strictly necessary for
* correct operation. We'll implement additional handling if/when we
* discover it's necessary.
*
* - On context save, NVIDIA sets 0x400314 bit 0 to 1 if the "3D state"
* flag is set; this gets saved into the context.
* - On context save, the context program for all cards loads nsource
* into a flag register and check for ILLEGAL_MTHD. If it's set,
* opcode 0x60000d is called before resuming normal operation.
* - Some context programs check more conditions than the above. NV44
* checks: ((nsource & 0x0857) || (0x400718 & 0x0100) || (intr & 0x0001))
* and calls 0x60000d before resuming normal operation.
* - At the very beginning of NVIDIA's context programs, flag 9 is checked
* and, if true, 0x800001 is called with count=0, pos=0, the flag is cleared
* and then the ctxprog is aborted. It looks like a complicated NOP; its
* purpose is unknown.
* - In the section of code that loads the per-vs state, NVIDIA checks
* flag 10. If it's set, they only transfer the small 0x300 byte block
* of state + the state for a single vs as opposed to the state for
* all vs units. It doesn't seem likely that it'll occur in normal
* operation, especially seeing as it appears NVIDIA may have screwed
* up the ctxprogs for some cards and have an invalid instruction
* rather than a cp_lsr(ctx, dwords_for_1_vs_unit) instruction.
* - There's a number of places where context offset 0 (where we place
* the PRAMIN offset of the context) is loaded into either 0x408000,
* 0x408004 or 0x408008. Not sure what's up there either.
* - The ctxprogs for some cards save 0x400a00 again during the cleanup
* path for auto-loadctx.
*/
#define CP_FLAG_CLEAR 0
#define CP_FLAG_SET 1
#define CP_FLAG_SWAP_DIRECTION ((0 * 32) + 0)
#define CP_FLAG_SWAP_DIRECTION_LOAD 0
#define CP_FLAG_SWAP_DIRECTION_SAVE 1
#define CP_FLAG_USER_SAVE ((0 * 32) + 5)
#define CP_FLAG_USER_SAVE_NOT_PENDING 0
#define CP_FLAG_USER_SAVE_PENDING 1
#define CP_FLAG_USER_LOAD ((0 * 32) + 6)
#define CP_FLAG_USER_LOAD_NOT_PENDING 0
#define CP_FLAG_USER_LOAD_PENDING 1
#define CP_FLAG_STATUS ((3 * 32) + 0)
#define CP_FLAG_STATUS_IDLE 0
#define CP_FLAG_STATUS_BUSY 1
#define CP_FLAG_AUTO_SAVE ((3 * 32) + 4)
#define CP_FLAG_AUTO_SAVE_NOT_PENDING 0
#define CP_FLAG_AUTO_SAVE_PENDING 1
#define CP_FLAG_AUTO_LOAD ((3 * 32) + 5)
#define CP_FLAG_AUTO_LOAD_NOT_PENDING 0
#define CP_FLAG_AUTO_LOAD_PENDING 1
#define CP_FLAG_UNK54 ((3 * 32) + 6)
#define CP_FLAG_UNK54_CLEAR 0
#define CP_FLAG_UNK54_SET 1
#define CP_FLAG_ALWAYS ((3 * 32) + 8)
#define CP_FLAG_ALWAYS_FALSE 0
#define CP_FLAG_ALWAYS_TRUE 1
#define CP_FLAG_UNK57 ((3 * 32) + 9)
#define CP_FLAG_UNK57_CLEAR 0
#define CP_FLAG_UNK57_SET 1
#define CP_CTX 0x00100000
#define CP_CTX_COUNT 0x000fc000
#define CP_CTX_COUNT_SHIFT 14
#define CP_CTX_REG 0x00003fff
#define CP_LOAD_SR 0x00200000
#define CP_LOAD_SR_VALUE 0x000fffff
#define CP_BRA 0x00400000
#define CP_BRA_IP 0x0000ff00
#define CP_BRA_IP_SHIFT 8
#define CP_BRA_IF_CLEAR 0x00000080
#define CP_BRA_FLAG 0x0000007f
#define CP_WAIT 0x00500000
#define CP_WAIT_SET 0x00000080
#define CP_WAIT_FLAG 0x0000007f
#define CP_SET 0x00700000
#define CP_SET_1 0x00000080
#define CP_SET_FLAG 0x0000007f
#define CP_NEXT_TO_SWAP 0x00600007
#define CP_NEXT_TO_CURRENT 0x00600009
#define CP_SET_CONTEXT_POINTER 0x0060000a
#define CP_END 0x0060000e
#define CP_LOAD_MAGIC_UNK01 0x00800001 /* unknown */
#define CP_LOAD_MAGIC_NV44TCL 0x00800029 /* per-vs state (0x4497) */
#define CP_LOAD_MAGIC_NV40TCL 0x00800041 /* per-vs state (0x4097) */
#include "drmP.h"
#include "nouveau_drv.h"
#include "nouveau_grctx.h"
/* TODO:
* - get vs count from 0x1540
* - document unimplemented bits compared to nvidia
* - nsource handling
* - R0 & 0x0200 handling
* - single-vs handling
* - 400314 bit 0
*/
static int
nv40_graph_4097(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
if ((dev_priv->chipset & 0xf0) == 0x60)
return 0;
return !!(0x0baf & (1 << dev_priv->chipset));
}
static int
nv40_graph_vs_count(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
switch (dev_priv->chipset) {
case 0x47:
case 0x49:
case 0x4b:
return 8;
case 0x40:
return 6;
case 0x41:
case 0x42:
return 5;
case 0x43:
case 0x44:
case 0x46:
case 0x4a:
return 3;
case 0x4c:
case 0x4e:
case 0x67:
default:
return 1;
}
}
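/* Branch targets within the emitted ctxprog: cp_name() binds a label to
 * the current instruction pointer, cp_bra() branches to it. */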
enum cp_label {
cp_check_load = 1,
cp_setup_auto_load,
cp_setup_load,
cp_setup_save,
cp_swap_state,
cp_swap_state3d_3_is_save,
cp_prepare_exit,
cp_exit,
};
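/* Convention for the construct functions below (assumed from their use
 * here): cp_ctx(ctx, reg, count) maps `count' PGRAPH registers starting
 * at `reg' into the context image, and gr_def(ctx, reg, value) sets the
 * initial value the image holds for one of those registers; registers
 * without a gr_def() default to zero. */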
static void
nv40_graph_construct_general(struct nouveau_grctx *ctx)
{
struct drm_nouveau_private *dev_priv = ctx->dev->dev_private;
int i;
cp_ctx(ctx, 0x4000a4, 1);
gr_def(ctx, 0x4000a4, 0x00000008);
cp_ctx(ctx, 0x400144, 58);
gr_def(ctx, 0x400144, 0x00000001);
cp_ctx(ctx, 0x400314, 1);
gr_def(ctx, 0x400314, 0x00000000);
cp_ctx(ctx, 0x400400, 10);
cp_ctx(ctx, 0x400480, 10);
cp_ctx(ctx, 0x400500, 19);
gr_def(ctx, 0x400514, 0x00040000);
gr_def(ctx, 0x400524, 0x55555555);
gr_def(ctx, 0x400528, 0x55555555);
gr_def(ctx, 0x40052c, 0x55555555);
gr_def(ctx, 0x400530, 0x55555555);
cp_ctx(ctx, 0x400560, 6);
gr_def(ctx, 0x400568, 0x0000ffff);
gr_def(ctx, 0x40056c, 0x0000ffff);
cp_ctx(ctx, 0x40057c, 5);
cp_ctx(ctx, 0x400710, 3);
gr_def(ctx, 0x400710, 0x20010001);
gr_def(ctx, 0x400714, 0x0f73ef00);
cp_ctx(ctx, 0x400724, 1);
gr_def(ctx, 0x400724, 0x02008821);
cp_ctx(ctx, 0x400770, 3);
if (dev_priv->chipset == 0x40) {
cp_ctx(ctx, 0x400814, 4);
cp_ctx(ctx, 0x400828, 5);
cp_ctx(ctx, 0x400840, 5);
gr_def(ctx, 0x400850, 0x00000040);
cp_ctx(ctx, 0x400858, 4);
gr_def(ctx, 0x400858, 0x00000040);
gr_def(ctx, 0x40085c, 0x00000040);
gr_def(ctx, 0x400864, 0x80000000);
cp_ctx(ctx, 0x40086c, 9);
gr_def(ctx, 0x40086c, 0x80000000);
gr_def(ctx, 0x400870, 0x80000000);
gr_def(ctx, 0x400874, 0x80000000);
gr_def(ctx, 0x400878, 0x80000000);
gr_def(ctx, 0x400888, 0x00000040);
gr_def(ctx, 0x40088c, 0x80000000);
cp_ctx(ctx, 0x4009c0, 8);
gr_def(ctx, 0x4009cc, 0x80000000);
gr_def(ctx, 0x4009dc, 0x80000000);
} else {
cp_ctx(ctx, 0x400840, 20);
if (!nv40_graph_4097(ctx->dev)) {
for (i = 0; i < 8; i++)
gr_def(ctx, 0x400860 + (i * 4), 0x00000001);
}
gr_def(ctx, 0x400880, 0x00000040);
gr_def(ctx, 0x400884, 0x00000040);
gr_def(ctx, 0x400888, 0x00000040);
cp_ctx(ctx, 0x400894, 11);
gr_def(ctx, 0x400894, 0x00000040);
if (nv40_graph_4097(ctx->dev)) {
for (i = 0; i < 8; i++)
gr_def(ctx, 0x4008a0 + (i * 4), 0x80000000);
}
cp_ctx(ctx, 0x4008e0, 2);
cp_ctx(ctx, 0x4008f8, 2);
if (dev_priv->chipset == 0x4c ||
(dev_priv->chipset & 0xf0) == 0x60)
cp_ctx(ctx, 0x4009f8, 1);
}
cp_ctx(ctx, 0x400a00, 73);
gr_def(ctx, 0x400b0c, 0x0b0b0b0c);
cp_ctx(ctx, 0x401000, 4);
cp_ctx(ctx, 0x405004, 1);
switch (dev_priv->chipset) {
case 0x47:
case 0x49:
case 0x4b:
cp_ctx(ctx, 0x403448, 1);
gr_def(ctx, 0x403448, 0x00001010);
break;
default:
cp_ctx(ctx, 0x403440, 1);
switch (dev_priv->chipset) {
case 0x40:
gr_def(ctx, 0x403440, 0x00000010);
break;
case 0x44:
case 0x46:
case 0x4a:
gr_def(ctx, 0x403440, 0x00003010);
break;
case 0x41:
case 0x42:
case 0x43:
case 0x4c:
case 0x4e:
case 0x67:
default:
gr_def(ctx, 0x403440, 0x00001010);
break;
}
break;
}
}
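/* 3D state, block 1: largely per-texture-unit defaults (see the fragment
 * and vertex texture unit loops below), plus assorted rasteriser state. */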
static void
nv40_graph_construct_state3d(struct nouveau_grctx *ctx)
{
struct drm_nouveau_private *dev_priv = ctx->dev->dev_private;
int i;
if (dev_priv->chipset == 0x40) {
cp_ctx(ctx, 0x401880, 51);
gr_def(ctx, 0x401940, 0x00000100);
} else
if (dev_priv->chipset == 0x46 || dev_priv->chipset == 0x47 ||
dev_priv->chipset == 0x49 || dev_priv->chipset == 0x4b) {
cp_ctx(ctx, 0x401880, 32);
for (i = 0; i < 16; i++)
gr_def(ctx, 0x401880 + (i * 4), 0x00000111);
if (dev_priv->chipset == 0x46)
cp_ctx(ctx, 0x401900, 16);
cp_ctx(ctx, 0x401940, 3);
}
cp_ctx(ctx, 0x40194c, 18);
gr_def(ctx, 0x401954, 0x00000111);
gr_def(ctx, 0x401958, 0x00080060);
gr_def(ctx, 0x401974, 0x00000080);
gr_def(ctx, 0x401978, 0xffff0000);
gr_def(ctx, 0x40197c, 0x00000001);
gr_def(ctx, 0x401990, 0x46400000);
if (dev_priv->chipset == 0x40) {
cp_ctx(ctx, 0x4019a0, 2);
cp_ctx(ctx, 0x4019ac, 5);
} else {
cp_ctx(ctx, 0x4019a0, 1);
cp_ctx(ctx, 0x4019b4, 3);
}
gr_def(ctx, 0x4019bc, 0xffff0000);
switch (dev_priv->chipset) {
case 0x46:
case 0x47:
case 0x49:
case 0x4b:
cp_ctx(ctx, 0x4019c0, 18);
for (i = 0; i < 16; i++)
gr_def(ctx, 0x4019c0 + (i * 4), 0x88888888);
break;
}
cp_ctx(ctx, 0x401a08, 8);
gr_def(ctx, 0x401a10, 0x0fff0000);
gr_def(ctx, 0x401a14, 0x0fff0000);
gr_def(ctx, 0x401a1c, 0x00011100);
cp_ctx(ctx, 0x401a2c, 4);
cp_ctx(ctx, 0x401a44, 26);
for (i = 0; i < 16; i++)
gr_def(ctx, 0x401a44 + (i * 4), 0x07ff0000);
gr_def(ctx, 0x401a8c, 0x4b7fffff);
if (dev_priv->chipset == 0x40) {
cp_ctx(ctx, 0x401ab8, 3);
} else {
cp_ctx(ctx, 0x401ab8, 1);
cp_ctx(ctx, 0x401ac0, 1);
}
cp_ctx(ctx, 0x401ad0, 8);
gr_def(ctx, 0x401ad0, 0x30201000);
gr_def(ctx, 0x401ad4, 0x70605040);
gr_def(ctx, 0x401ad8, 0xb8a89888);
gr_def(ctx, 0x401adc, 0xf8e8d8c8);
cp_ctx(ctx, 0x401b10, dev_priv->chipset == 0x40 ? 2 : 1);
gr_def(ctx, 0x401b10, 0x40100000);
cp_ctx(ctx, 0x401b18, dev_priv->chipset == 0x40 ? 6 : 5);
gr_def(ctx, 0x401b28, dev_priv->chipset == 0x40 ?
0x00000004 : 0x00000000);
cp_ctx(ctx, 0x401b30, 25);
gr_def(ctx, 0x401b34, 0x0000ffff);
gr_def(ctx, 0x401b68, 0x435185d6);
gr_def(ctx, 0x401b6c, 0x2155b699);
gr_def(ctx, 0x401b70, 0xfedcba98);
gr_def(ctx, 0x401b74, 0x00000098);
gr_def(ctx, 0x401b84, 0xffffffff);
gr_def(ctx, 0x401b88, 0x00ff7000);
gr_def(ctx, 0x401b8c, 0x0000ffff);
if (dev_priv->chipset != 0x44 && dev_priv->chipset != 0x4a &&
dev_priv->chipset != 0x4e)
cp_ctx(ctx, 0x401b94, 1);
cp_ctx(ctx, 0x401b98, 8);
gr_def(ctx, 0x401b9c, 0x00ff0000);
cp_ctx(ctx, 0x401bc0, 9);
gr_def(ctx, 0x401be0, 0x00ffff00);
cp_ctx(ctx, 0x401c00, 192);
for (i = 0; i < 16; i++) { /* fragment texture units */
gr_def(ctx, 0x401c40 + (i * 4), 0x00018488);
gr_def(ctx, 0x401c80 + (i * 4), 0x00028202);
gr_def(ctx, 0x401d00 + (i * 4), 0x0000aae4);
gr_def(ctx, 0x401d40 + (i * 4), 0x01012000);
gr_def(ctx, 0x401d80 + (i * 4), 0x00080008);
gr_def(ctx, 0x401e00 + (i * 4), 0x00100008);
}
for (i = 0; i < 4; i++) { /* vertex texture units */
gr_def(ctx, 0x401e90 + (i * 4), 0x0001bc80);
gr_def(ctx, 0x401ea0 + (i * 4), 0x00000202);
gr_def(ctx, 0x401ec0 + (i * 4), 0x00000008);
gr_def(ctx, 0x401ee0 + (i * 4), 0x00080008);
}
cp_ctx(ctx, 0x400f5c, 3);
gr_def(ctx, 0x400f5c, 0x00000002);
cp_ctx(ctx, 0x400f84, 1);
}
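/* 3D state, block 2. Many of the ranges here change length or defaults
 * between NV40 and the later chipsets, hence the chipset switches. */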
static void
nv40_graph_construct_state3d_2(struct nouveau_grctx *ctx)
{
struct drm_nouveau_private *dev_priv = ctx->dev->dev_private;
int i;
cp_ctx(ctx, 0x402000, 1);
cp_ctx(ctx, 0x402404, dev_priv->chipset == 0x40 ? 1 : 2);
switch (dev_priv->chipset) {
case 0x40:
gr_def(ctx, 0x402404, 0x00000001);
break;
case 0x4c:
case 0x4e:
case 0x67:
gr_def(ctx, 0x402404, 0x00000020);
break;
case 0x46:
case 0x49:
case 0x4b:
gr_def(ctx, 0x402404, 0x00000421);
break;
default:
gr_def(ctx, 0x402404, 0x00000021);
}
if (dev_priv->chipset != 0x40)
gr_def(ctx, 0x402408, 0x030c30c3);
switch (dev_priv->chipset) {
case 0x44:
case 0x46:
case 0x4a:
case 0x4c:
case 0x4e:
case 0x67:
cp_ctx(ctx, 0x402440, 1);
gr_def(ctx, 0x402440, 0x00011001);
break;
default:
break;
}
cp_ctx(ctx, 0x402480, dev_priv->chipset == 0x40 ? 8 : 9);
gr_def(ctx, 0x402488, 0x3e020200);
gr_def(ctx, 0x40248c, 0x00ffffff);
switch (dev_priv->chipset) {
case 0x40:
gr_def(ctx, 0x402490, 0x60103f00);
break;
case 0x47:
gr_def(ctx, 0x402490, 0x40103f00);
break;
case 0x41:
case 0x42:
case 0x49:
case 0x4b:
gr_def(ctx, 0x402490, 0x20103f00);
break;
default:
gr_def(ctx, 0x402490, 0x0c103f00);
break;
}
gr_def(ctx, 0x40249c, dev_priv->chipset <= 0x43 ?
0x00020000 : 0x00040000);
cp_ctx(ctx, 0x402500, 31);
gr_def(ctx, 0x402530, 0x00008100);
if (dev_priv->chipset == 0x40)
cp_ctx(ctx, 0x40257c, 6);
cp_ctx(ctx, 0x402594, 16);
cp_ctx(ctx, 0x402800, 17);
gr_def(ctx, 0x402800, 0x00000001);
switch (dev_priv->chipset) {
case 0x47:
case 0x49:
case 0x4b:
cp_ctx(ctx, 0x402864, 1);
gr_def(ctx, 0x402864, 0x00001001);
cp_ctx(ctx, 0x402870, 3);
gr_def(ctx, 0x402878, 0x00000003);
		if (dev_priv->chipset != 0x47) { /* these belong at the end!! */
cp_ctx(ctx, 0x402900, 1);
cp_ctx(ctx, 0x402940, 1);
cp_ctx(ctx, 0x402980, 1);
cp_ctx(ctx, 0x4029c0, 1);
cp_ctx(ctx, 0x402a00, 1);
cp_ctx(ctx, 0x402a40, 1);
cp_ctx(ctx, 0x402a80, 1);
cp_ctx(ctx, 0x402ac0, 1);
}
break;
case 0x40:
cp_ctx(ctx, 0x402844, 1);
gr_def(ctx, 0x402844, 0x00000001);
cp_ctx(ctx, 0x402850, 1);
break;
default:
cp_ctx(ctx, 0x402844, 1);
gr_def(ctx, 0x402844, 0x00001001);
cp_ctx(ctx, 0x402850, 2);
gr_def(ctx, 0x402854, 0x00000003);
break;
}
cp_ctx(ctx, 0x402c00, 4);
gr_def(ctx, 0x402c00, dev_priv->chipset == 0x40 ?
0x80800001 : 0x00888001);
switch (dev_priv->chipset) {
case 0x47:
case 0x49:
case 0x4b:
cp_ctx(ctx, 0x402c20, 40);
for (i = 0; i < 32; i++)
gr_def(ctx, 0x402c40 + (i * 4), 0xffffffff);
cp_ctx(ctx, 0x4030b8, 13);
gr_def(ctx, 0x4030dc, 0x00000005);
gr_def(ctx, 0x4030e8, 0x0000ffff);
break;
default:
cp_ctx(ctx, 0x402c10, 4);
if (dev_priv->chipset == 0x40)
cp_ctx(ctx, 0x402c20, 36);
else
if (dev_priv->chipset <= 0x42)
cp_ctx(ctx, 0x402c20, 24);
else
if (dev_priv->chipset <= 0x4a)
cp_ctx(ctx, 0x402c20, 16);
else
cp_ctx(ctx, 0x402c20, 8);
cp_ctx(ctx, 0x402cb0, dev_priv->chipset == 0x40 ? 12 : 13);
gr_def(ctx, 0x402cd4, 0x00000005);
if (dev_priv->chipset != 0x40)
gr_def(ctx, 0x402ce0, 0x0000ffff);
break;
}
cp_ctx(ctx, 0x403400, dev_priv->chipset == 0x40 ? 4 : 3);
cp_ctx(ctx, 0x403410, dev_priv->chipset == 0x40 ? 4 : 3);
cp_ctx(ctx, 0x403420, nv40_graph_vs_count(ctx->dev));
for (i = 0; i < nv40_graph_vs_count(ctx->dev); i++)
gr_def(ctx, 0x403420 + (i * 4), 0x00005555);
if (dev_priv->chipset != 0x40) {
cp_ctx(ctx, 0x403600, 1);
gr_def(ctx, 0x403600, 0x00000001);
}
cp_ctx(ctx, 0x403800, 1);
cp_ctx(ctx, 0x403c18, 1);
gr_def(ctx, 0x403c18, 0x00000001);
switch (dev_priv->chipset) {
case 0x46:
case 0x47:
case 0x49:
case 0x4b:
cp_ctx(ctx, 0x405018, 1);
gr_def(ctx, 0x405018, 0x08e00001);
cp_ctx(ctx, 0x405c24, 1);
gr_def(ctx, 0x405c24, 0x000e3000);
break;
}
if (dev_priv->chipset != 0x4e)
cp_ctx(ctx, 0x405800, 11);
cp_ctx(ctx, 0x407000, 1);
}
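/* "Some other block of `random' state" (see nv40_grctx_init below),
 * transferred with the unknown 0x800001 magic opcode rather than with
 * CP_CTX mappings: the scratch register is loaded with the transfer
 * size and the block is reserved in the context image by bumping
 * ctxvals_pos. The size depends on whether the chipset has 0x4097. */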
static void
nv40_graph_construct_state3d_3(struct nouveau_grctx *ctx)
{
int len = nv40_graph_4097(ctx->dev) ? 0x0684 : 0x0084;
cp_out (ctx, 0x300000);
cp_lsr (ctx, len - 4);
cp_bra (ctx, SWAP_DIRECTION, SAVE, cp_swap_state3d_3_is_save);
cp_lsr (ctx, len);
cp_name(ctx, cp_swap_state3d_3_is_save);
cp_out (ctx, 0x800001);
ctx->ctxvals_pos += len;
}
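/* Per-vs-unit shader state, also transferred via a magic opcode
 * (0x800041 on 0x4097-capable chipsets, 0x800029 otherwise -- see the
 * CP_LOAD_MAGIC_* defines). Layout in the context image: a 0x300-byte
 * common block, then vs_len dwords per vs unit. In VALS mode the b0/b1
 * regions of each unit get defaults of 0x00000001 and 0x3f800000 (1.0f). */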
static void
nv40_graph_construct_shader(struct nouveau_grctx *ctx)
{
struct drm_device *dev = ctx->dev;
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_gpuobj *obj = ctx->data;
int vs, vs_nr, vs_len, vs_nr_b0, vs_nr_b1, b0_offset, b1_offset;
int offset, i;
vs_nr = nv40_graph_vs_count(ctx->dev);
vs_nr_b0 = 363;
vs_nr_b1 = dev_priv->chipset == 0x40 ? 128 : 64;
if (dev_priv->chipset == 0x40) {
b0_offset = 0x2200/4; /* 33a0 */
b1_offset = 0x55a0/4; /* 1500 */
vs_len = 0x6aa0/4;
} else
if (dev_priv->chipset == 0x41 || dev_priv->chipset == 0x42) {
b0_offset = 0x2200/4; /* 2200 */
b1_offset = 0x4400/4; /* 0b00 */
vs_len = 0x4f00/4;
} else {
b0_offset = 0x1d40/4; /* 2200 */
b1_offset = 0x3f40/4; /* 0b00 : 0a40 */
vs_len = nv40_graph_4097(dev) ? 0x4a40/4 : 0x4980/4;
}
cp_lsr(ctx, vs_len * vs_nr + 0x300/4);
cp_out(ctx, nv40_graph_4097(dev) ? 0x800041 : 0x800029);
offset = ctx->ctxvals_pos;
ctx->ctxvals_pos += (0x0300/4 + (vs_nr * vs_len));
if (ctx->mode != NOUVEAU_GRCTX_VALS)
return;
offset += 0x0280/4;
for (i = 0; i < 16; i++, offset += 2)
nv_wo32(dev, obj, offset, 0x3f800000);
for (vs = 0; vs < vs_nr; vs++, offset += vs_len) {
for (i = 0; i < vs_nr_b0 * 6; i += 6)
nv_wo32(dev, obj, offset + b0_offset + i, 0x00000001);
for (i = 0; i < vs_nr_b1 * 4; i += 4)
nv_wo32(dev, obj, offset + b1_offset + i, 0x3f800000);
}
}
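/* Entry point for ctxprog/state generation. The emitted program
 * dispatches on the auto/user save and load flags, swaps the PGRAPH
 * state described by the construct functions above, and clears the
 * user flags before ending. */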
void
nv40_grctx_init(struct nouveau_grctx *ctx)
{
/* decide whether we're loading/unloading the context */
cp_bra (ctx, AUTO_SAVE, PENDING, cp_setup_save);
cp_bra (ctx, USER_SAVE, PENDING, cp_setup_save);
cp_name(ctx, cp_check_load);
cp_bra (ctx, AUTO_LOAD, PENDING, cp_setup_auto_load);
cp_bra (ctx, USER_LOAD, PENDING, cp_setup_load);
cp_bra (ctx, ALWAYS, TRUE, cp_exit);
/* setup for context load */
cp_name(ctx, cp_setup_auto_load);
cp_wait(ctx, STATUS, IDLE);
cp_out (ctx, CP_NEXT_TO_SWAP);
cp_name(ctx, cp_setup_load);
cp_wait(ctx, STATUS, IDLE);
cp_set (ctx, SWAP_DIRECTION, LOAD);
cp_out (ctx, 0x00910880); /* ?? */
cp_out (ctx, 0x00901ffe); /* ?? */
cp_out (ctx, 0x01940000); /* ?? */
cp_lsr (ctx, 0x20);
cp_out (ctx, 0x0060000b); /* ?? */
cp_wait(ctx, UNK57, CLEAR);
cp_out (ctx, 0x0060000c); /* ?? */
cp_bra (ctx, ALWAYS, TRUE, cp_swap_state);
/* setup for context save */
cp_name(ctx, cp_setup_save);
cp_set (ctx, SWAP_DIRECTION, SAVE);
/* general PGRAPH state */
cp_name(ctx, cp_swap_state);
cp_pos (ctx, 0x00020/4);
nv40_graph_construct_general(ctx);
cp_wait(ctx, STATUS, IDLE);
/* 3D state, block 1 */
cp_bra (ctx, UNK54, CLEAR, cp_prepare_exit);
nv40_graph_construct_state3d(ctx);
cp_wait(ctx, STATUS, IDLE);
/* 3D state, block 2 */
nv40_graph_construct_state3d_2(ctx);
/* Some other block of "random" state */
nv40_graph_construct_state3d_3(ctx);
/* Per-vertex shader state */
cp_pos (ctx, ctx->ctxvals_pos);
nv40_graph_construct_shader(ctx);
/* pre-exit state updates */
cp_name(ctx, cp_prepare_exit);
cp_bra (ctx, SWAP_DIRECTION, SAVE, cp_check_load);
cp_bra (ctx, USER_SAVE, PENDING, cp_exit);
cp_out (ctx, CP_NEXT_TO_CURRENT);
cp_name(ctx, cp_exit);
cp_set (ctx, USER_SAVE, NOT_PENDING);
cp_set (ctx, USER_LOAD, NOT_PENDING);
cp_out (ctx, CP_END);
}

View File

@ -45,7 +45,7 @@ nv50_crtc_lut_load(struct drm_crtc *crtc)
void __iomem *lut = nvbo_kmap_obj_iovirtual(nv_crtc->lut.nvbo);
int i;
NV_DEBUG(crtc->dev, "\n");
NV_DEBUG_KMS(crtc->dev, "\n");
for (i = 0; i < 256; i++) {
writew(nv_crtc->lut.r[i] >> 2, lut + 8*i + 0);
@ -68,8 +68,8 @@ nv50_crtc_blank(struct nouveau_crtc *nv_crtc, bool blanked)
struct nouveau_channel *evo = dev_priv->evo;
int index = nv_crtc->index, ret;
NV_DEBUG(dev, "index %d\n", nv_crtc->index);
NV_DEBUG(dev, "%s\n", blanked ? "blanked" : "unblanked");
NV_DEBUG_KMS(dev, "index %d\n", nv_crtc->index);
NV_DEBUG_KMS(dev, "%s\n", blanked ? "blanked" : "unblanked");
if (blanked) {
nv_crtc->cursor.hide(nv_crtc, false);
@ -139,7 +139,7 @@ nv50_crtc_set_dither(struct nouveau_crtc *nv_crtc, bool on, bool update)
struct nouveau_channel *evo = dev_priv->evo;
int ret;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
ret = RING_SPACE(evo, 2 + (update ? 2 : 0));
if (ret) {
@ -193,7 +193,7 @@ nv50_crtc_set_scale(struct nouveau_crtc *nv_crtc, int scaling_mode, bool update)
uint32_t outX, outY, horiz, vert;
int ret;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
switch (scaling_mode) {
case DRM_MODE_SCALE_NONE:
@ -301,7 +301,7 @@ nv50_crtc_destroy(struct drm_crtc *crtc)
struct drm_device *dev = crtc->dev;
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
if (!crtc)
return;
@ -433,7 +433,7 @@ nv50_crtc_prepare(struct drm_crtc *crtc)
struct drm_device *dev = crtc->dev;
struct drm_encoder *encoder;
NV_DEBUG(dev, "index %d\n", nv_crtc->index);
NV_DEBUG_KMS(dev, "index %d\n", nv_crtc->index);
/* Disconnect all unused encoders. */
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
@ -458,7 +458,7 @@ nv50_crtc_commit(struct drm_crtc *crtc)
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
int ret;
NV_DEBUG(dev, "index %d\n", nv_crtc->index);
NV_DEBUG_KMS(dev, "index %d\n", nv_crtc->index);
nv50_crtc_blank(nv_crtc, false);
@ -497,7 +497,7 @@ nv50_crtc_do_mode_set_base(struct drm_crtc *crtc, int x, int y,
struct nouveau_framebuffer *fb = nouveau_framebuffer(drm_fb);
int ret, format;
NV_DEBUG(dev, "index %d\n", nv_crtc->index);
NV_DEBUG_KMS(dev, "index %d\n", nv_crtc->index);
switch (drm_fb->depth) {
case 8:
@ -612,7 +612,7 @@ nv50_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode,
*nv_crtc->mode = *adjusted_mode;
NV_DEBUG(dev, "index %d\n", nv_crtc->index);
NV_DEBUG_KMS(dev, "index %d\n", nv_crtc->index);
hsync_dur = adjusted_mode->hsync_end - adjusted_mode->hsync_start;
vsync_dur = adjusted_mode->vsync_end - adjusted_mode->vsync_start;
@ -706,7 +706,7 @@ nv50_crtc_create(struct drm_device *dev, int index)
struct nouveau_crtc *nv_crtc = NULL;
int ret, i;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
nv_crtc = kzalloc(sizeof(*nv_crtc), GFP_KERNEL);
if (!nv_crtc)

View File

@ -41,7 +41,7 @@ nv50_cursor_show(struct nouveau_crtc *nv_crtc, bool update)
struct drm_device *dev = nv_crtc->base.dev;
int ret;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
if (update && nv_crtc->cursor.visible)
return;
@ -76,7 +76,7 @@ nv50_cursor_hide(struct nouveau_crtc *nv_crtc, bool update)
struct drm_device *dev = nv_crtc->base.dev;
int ret;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
if (update && !nv_crtc->cursor.visible)
return;
@ -116,7 +116,7 @@ nv50_cursor_set_pos(struct nouveau_crtc *nv_crtc, int x, int y)
static void
nv50_cursor_set_offset(struct nouveau_crtc *nv_crtc, uint32_t offset)
{
NV_DEBUG(nv_crtc->base.dev, "\n");
NV_DEBUG_KMS(nv_crtc->base.dev, "\n");
if (offset == nv_crtc->cursor.offset)
return;
@ -143,7 +143,7 @@ nv50_cursor_fini(struct nouveau_crtc *nv_crtc)
struct drm_device *dev = nv_crtc->base.dev;
int idx = nv_crtc->index;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
nv_wr32(dev, NV50_PDISPLAY_CURSOR_CURSOR_CTRL2(idx), 0);
if (!nv_wait(NV50_PDISPLAY_CURSOR_CURSOR_CTRL2(idx),

View File

@ -44,7 +44,7 @@ nv50_dac_disconnect(struct nouveau_encoder *nv_encoder)
struct nouveau_channel *evo = dev_priv->evo;
int ret;
NV_DEBUG(dev, "Disconnecting DAC %d\n", nv_encoder->or);
NV_DEBUG_KMS(dev, "Disconnecting DAC %d\n", nv_encoder->or);
ret = RING_SPACE(evo, 2);
if (ret) {
@ -81,11 +81,11 @@ nv50_dac_detect(struct drm_encoder *encoder, struct drm_connector *connector)
/* Use bios provided value if possible. */
if (dev_priv->vbios->dactestval) {
load_pattern = dev_priv->vbios->dactestval;
NV_DEBUG(dev, "Using bios provided load_pattern of %d\n",
NV_DEBUG_KMS(dev, "Using bios provided load_pattern of %d\n",
load_pattern);
} else {
load_pattern = 340;
NV_DEBUG(dev, "Using default load_pattern of %d\n",
NV_DEBUG_KMS(dev, "Using default load_pattern of %d\n",
load_pattern);
}
@ -103,9 +103,9 @@ nv50_dac_detect(struct drm_encoder *encoder, struct drm_connector *connector)
status = connector_status_connected;
if (status == connector_status_connected)
NV_DEBUG(dev, "Load was detected on output with or %d\n", or);
NV_DEBUG_KMS(dev, "Load was detected on output with or %d\n", or);
else
NV_DEBUG(dev, "Load was not detected on output with or %d\n", or);
NV_DEBUG_KMS(dev, "Load was not detected on output with or %d\n", or);
return status;
}
@ -118,7 +118,7 @@ nv50_dac_dpms(struct drm_encoder *encoder, int mode)
uint32_t val;
int or = nv_encoder->or;
NV_DEBUG(dev, "or %d mode %d\n", or, mode);
NV_DEBUG_KMS(dev, "or %d mode %d\n", or, mode);
/* wait for it to be done */
if (!nv_wait(NV50_PDISPLAY_DAC_DPMS_CTRL(or),
@ -173,7 +173,7 @@ nv50_dac_mode_fixup(struct drm_encoder *encoder, struct drm_display_mode *mode,
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
struct nouveau_connector *connector;
NV_DEBUG(encoder->dev, "or %d\n", nv_encoder->or);
NV_DEBUG_KMS(encoder->dev, "or %d\n", nv_encoder->or);
connector = nouveau_encoder_connector_get(nv_encoder);
if (!connector) {
@ -213,7 +213,7 @@ nv50_dac_mode_set(struct drm_encoder *encoder, struct drm_display_mode *mode,
uint32_t mode_ctl = 0, mode_ctl2 = 0;
int ret;
NV_DEBUG(dev, "or %d\n", nv_encoder->or);
NV_DEBUG_KMS(dev, "or %d\n", nv_encoder->or);
nv50_dac_dpms(encoder, DRM_MODE_DPMS_ON);
@ -264,7 +264,7 @@ nv50_dac_destroy(struct drm_encoder *encoder)
if (!encoder)
return;
NV_DEBUG(encoder->dev, "\n");
NV_DEBUG_KMS(encoder->dev, "\n");
drm_encoder_cleanup(encoder);
kfree(nv_encoder);
@ -280,7 +280,7 @@ nv50_dac_create(struct drm_device *dev, struct dcb_entry *entry)
struct nouveau_encoder *nv_encoder;
struct drm_encoder *encoder;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
NV_INFO(dev, "Detected a DAC output\n");
nv_encoder = kzalloc(sizeof(*nv_encoder), GFP_KERNEL);

View File

@ -188,7 +188,7 @@ nv50_display_init(struct drm_device *dev)
uint64_t start;
int ret, i;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
nv_wr32(dev, 0x00610184, nv_rd32(dev, 0x00614004));
/*
@ -232,7 +232,7 @@ nv50_display_init(struct drm_device *dev)
nv_wr32(dev, NV50_PDISPLAY_UNK_380, 0);
/* RAM is clamped to 256 MiB. */
ram_amount = nouveau_mem_fb_amount(dev);
NV_DEBUG(dev, "ram_amount %d\n", ram_amount);
NV_DEBUG_KMS(dev, "ram_amount %d\n", ram_amount);
if (ram_amount > 256*1024*1024)
ram_amount = 256*1024*1024;
nv_wr32(dev, NV50_PDISPLAY_RAM_AMOUNT, ram_amount - 1);
@ -398,7 +398,7 @@ static int nv50_display_disable(struct drm_device *dev)
struct drm_crtc *drm_crtc;
int ret, i;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
list_for_each_entry(drm_crtc, &dev->mode_config.crtc_list, head) {
struct nouveau_crtc *crtc = nouveau_crtc(drm_crtc);
@ -469,7 +469,7 @@ int nv50_display_create(struct drm_device *dev)
uint32_t connector[16] = {};
int ret, i;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
/* init basic kernel modesetting */
drm_mode_config_init(dev);
@ -573,7 +573,7 @@ int nv50_display_destroy(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
NV_DEBUG(dev, "\n");
NV_DEBUG_KMS(dev, "\n");
drm_mode_config_cleanup(dev);
@ -617,7 +617,7 @@ nv50_display_irq_head(struct drm_device *dev, int *phead,
* CRTC separately, and submission will be blocked by the GPU
* until we handle each in turn.
*/
NV_DEBUG(dev, "0x610030: 0x%08x\n", unk30);
NV_DEBUG_KMS(dev, "0x610030: 0x%08x\n", unk30);
head = ffs((unk30 >> 9) & 3) - 1;
if (head < 0)
return -EINVAL;
@ -661,7 +661,7 @@ nv50_display_irq_head(struct drm_device *dev, int *phead,
or = i;
}
NV_DEBUG(dev, "type %d, or %d\n", type, or);
NV_DEBUG_KMS(dev, "type %d, or %d\n", type, or);
if (type == OUTPUT_ANY) {
NV_ERROR(dev, "unknown encoder!!\n");
return -1;
@ -811,7 +811,7 @@ nv50_display_unk20_handler(struct drm_device *dev)
pclk = nv_rd32(dev, NV50_PDISPLAY_CRTC_P(head, CLOCK)) & 0x3fffff;
script = nv50_display_script_select(dev, dcbent, pclk);
NV_DEBUG(dev, "head %d pxclk: %dKHz\n", head, pclk);
NV_DEBUG_KMS(dev, "head %d pxclk: %dKHz\n", head, pclk);
if (dcbent->type != OUTPUT_DP)
nouveau_bios_run_display_table(dev, dcbent, 0, -2);
@ -870,7 +870,7 @@ nv50_display_irq_handler_bh(struct work_struct *work)
uint32_t intr0 = nv_rd32(dev, NV50_PDISPLAY_INTR_0);
uint32_t intr1 = nv_rd32(dev, NV50_PDISPLAY_INTR_1);
NV_DEBUG(dev, "PDISPLAY_INTR_BH 0x%08x 0x%08x\n", intr0, intr1);
NV_DEBUG_KMS(dev, "PDISPLAY_INTR_BH 0x%08x 0x%08x\n", intr0, intr1);
if (intr1 & NV50_PDISPLAY_INTR_1_CLK_UNK10)
nv50_display_unk10_handler(dev);
@ -974,7 +974,7 @@ nv50_display_irq_handler(struct drm_device *dev)
uint32_t intr1 = nv_rd32(dev, NV50_PDISPLAY_INTR_1);
uint32_t clock;
NV_DEBUG(dev, "PDISPLAY_INTR 0x%08x 0x%08x\n", intr0, intr1);
NV_DEBUG_KMS(dev, "PDISPLAY_INTR 0x%08x 0x%08x\n", intr0, intr1);
if (!intr0 && !(intr1 & ~delayed))
break;

View File

@ -416,7 +416,7 @@ nv50_fifo_unload_context(struct drm_device *dev)
NV_DEBUG(dev, "\n");
chid = pfifo->channel_id(dev);
if (chid < 0 || chid >= dev_priv->engine.fifo.channels)
if (chid < 1 || chid >= dev_priv->engine.fifo.channels - 1)
return 0;
chan = dev_priv->fifos[chid];

Some files were not shown because too many files have changed in this diff.