Merge tag 'drm-misc-next-2016-12-30' of git://anongit.freedesktop.org/git/drm-misc into drm-intel-next-queued

Directly merge drm-misc into drm-intel since Dave is on vacation and
we need the various drm-misc patches (fb format rework, drm mm fixes,
selftest framework and others). Also pulled back -rc2 in first to
resync with drm-intel-fixes and make sure I can reuse the exact rerere
solutions from drm-tip for safety, and because I'm lazy.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Commit ef426c1038 by Daniel Vetter, 2017-01-04 11:41:10 +01:00
226 changed files with 5261 additions and 2018 deletions


@@ -0,0 +1,46 @@
THS8135 Video DAC
-----------------

This is the binding for Texas Instruments THS8135 Video DAC bridge.

Required properties:

- compatible: Must be "ti,ths8135"

Required nodes:

This device has two video ports. Their connections are modelled using the OF
graph bindings specified in Documentation/devicetree/bindings/graph.txt.

- Video port 0 for RGB input
- Video port 1 for VGA output

Example
-------

vga-bridge {
    compatible = "ti,ths8135";
    #address-cells = <1>;
    #size-cells = <0>;

    ports {
        #address-cells = <1>;
        #size-cells = <0>;

        port@0 {
            reg = <0>;

            vga_bridge_in: endpoint {
                remote-endpoint = <&lcdc_out_vga>;
            };
        };

        port@1 {
            reg = <1>;

            vga_bridge_out: endpoint {
                remote-endpoint = <&vga_con_in>;
            };
        };
    };
};


@@ -16,7 +16,7 @@ Required properties:
  "clk_ade_core" for the ADE core clock.
  "clk_codec_jpeg" for the media NOC QoS clock, which use the same clock with
  jpeg codec.
-  "clk_ade_pix" for the ADE pixel clok.
+  "clk_ade_pix" for the ADE pixel clock.
- assigned-clocks: Should contain "clk_ade_core" and "clk_codec_jpeg" clocks'
  phandle + clock-specifier pairs.
- assigned-clock-rates: clock rates, one for each entry in assigned-clocks.


@@ -1,482 +0,0 @@
DMA Buffer Sharing API Guide
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sumit Semwal
<sumit dot semwal at linaro dot org>
<sumit dot semwal at ti dot com>
This document serves as a guide to device-driver writers on what the dma-buf
buffer sharing API is and how to use it for exporting and using shared buffers.
Any device driver which wishes to be a part of DMA buffer sharing can do so as
either the 'exporter' of buffers, or the 'user' of buffers.
Say a driver A wants to use buffers created by driver B; then we call B the
exporter, and A the buffer-user.
The exporter
- implements and manages operations[1] for the buffer
- allows other users to share the buffer by using dma_buf sharing APIs,
- manages the details of buffer allocation,
- decides about the actual backing storage where this allocation happens,
- takes care of any migration of scatterlist - for all (shared) users of this
buffer,
The buffer-user
- is one of (many) sharing users of the buffer.
- doesn't need to worry about how the buffer is allocated, or where.
- needs a mechanism to get access to the scatterlist that makes up this buffer
in memory, mapped into its own address space, so it can access the same area
of memory.
dma-buf operations for device dma only
--------------------------------------
The dma_buf buffer sharing API usage contains the following steps:
1. Exporter announces that it wishes to export a buffer
2. Userspace gets the file descriptor associated with the exported buffer, and
passes it around to potential buffer-users based on use case
3. Each buffer-user 'connects' itself to the buffer
4. When needed, buffer-user requests access to the buffer from exporter
5. When finished with its use, the buffer-user notifies end-of-DMA to exporter
6. when buffer-user is done using this buffer completely, it 'disconnects'
itself from the buffer.
1. Exporter's announcement of buffer export
The buffer exporter announces its wish to export a buffer. In this, it
connects its own private buffer data, provides implementation for operations
that can be performed on the exported dma_buf, and flags for the file
associated with this buffer. All these fields are filled in struct
dma_buf_export_info, defined via the DEFINE_DMA_BUF_EXPORT_INFO macro.
Interface:
DEFINE_DMA_BUF_EXPORT_INFO(exp_info)
struct dma_buf *dma_buf_export(struct dma_buf_export_info *exp_info)
If this succeeds, dma_buf_export allocates a dma_buf structure, and
returns a pointer to the same. It also associates an anonymous file with this
buffer, so it can be exported. On failure to allocate the dma_buf object,
it returns NULL.
'exp_name' in struct dma_buf_export_info is the name of exporter - to
facilitate information while debugging. It is set to KBUILD_MODNAME by
default, so exporters don't have to provide a specific name, if they don't
wish to.
DEFINE_DMA_BUF_EXPORT_INFO macro defines the struct dma_buf_export_info,
zeroes it out and pre-populates exp_name in it.
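As a rough sketch of this step (the "my_buffer" type and "my_dmabuf_ops" are
made-up placeholders, not taken from this document), an exporter might wrap
its private buffer object like this:

    static struct dma_buf *my_export(struct my_buffer *buf)
    {
        DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

        exp_info.ops = &my_dmabuf_ops;  /* our struct dma_buf_ops */
        exp_info.size = buf->size;      /* invariant over the buffer lifetime */
        exp_info.flags = O_RDWR;        /* flags for the anonymous file */
        exp_info.priv = buf;            /* private bookkeeping for the exporter */

        return dma_buf_export(&exp_info);
    }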
2. Userspace gets a handle to pass around to potential buffer-users
The userspace entity requests a file descriptor (fd) which is a handle to the
anonymous file associated with the buffer. It can then share the fd with other
drivers and/or processes.
Interface:
int dma_buf_fd(struct dma_buf *dmabuf, int flags)
This API installs an fd for the anonymous file associated with this buffer;
returns either 'fd', or error.
3. Each buffer-user 'connects' itself to the buffer
Each buffer-user now gets a reference to the buffer, using the fd passed to
it.
Interface:
struct dma_buf *dma_buf_get(int fd)
This API will return a reference to the dma_buf, and increment refcount for
it.
After this, the buffer-user needs to attach its device with the buffer, which
helps the exporter to know of device buffer constraints.
Interface:
struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
struct device *dev)
This API returns a reference to an attachment structure, which is then used
for scatterlist operations. It will optionally call the 'attach' dma_buf
operation, if provided by the exporter.
The dma-buf sharing framework does the bookkeeping bits related to managing
the list of all attachments to a buffer.
Until this stage, the buffer-exporter has the option to choose not to actually
allocate the backing storage for this buffer, but wait for the first buffer-user
to request use of buffer for allocation.
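A minimal importer-side sketch of steps 2 and 3 ("my_dev" stands in for the
importing device; error handling is abbreviated):

    struct dma_buf *dmabuf;
    struct dma_buf_attachment *attach;

    dmabuf = dma_buf_get(fd);       /* takes a reference on the buffer */
    if (IS_ERR(dmabuf))
        return PTR_ERR(dmabuf);

    attach = dma_buf_attach(dmabuf, my_dev);  /* announce device constraints */
    if (IS_ERR(attach)) {
        dma_buf_put(dmabuf);
        return PTR_ERR(attach);
    }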
4. When needed, buffer-user requests access to the buffer
Whenever a buffer-user wants to use the buffer for any DMA, it asks for
access to the buffer using dma_buf_map_attachment API. At least one attach to
the buffer must have happened before map_dma_buf can be called.
Interface:
struct sg_table * dma_buf_map_attachment(struct dma_buf_attachment *,
enum dma_data_direction);
This is a wrapper to dma_buf->ops->map_dma_buf operation, which hides the
"dma_buf->ops->" indirection from the users of this interface.
In struct dma_buf_ops, map_dma_buf is defined as
struct sg_table * (*map_dma_buf)(struct dma_buf_attachment *,
enum dma_data_direction);
It is one of the buffer operations that must be implemented by the exporter.
It should return the sg_table containing scatterlist for this buffer, mapped
into caller's address space.
If this is being called for the first time, the exporter can now choose to
scan through the list of attachments for this buffer, collate the requirements
of the attached devices, and choose an appropriate backing storage for the
buffer.
Based on enum dma_data_direction, it might be possible to have multiple users
accessing at the same time (for reading, maybe), or any other kind of sharing
that the exporter might wish to make available to buffer-users.
map_dma_buf() operation can return -EINTR if it is interrupted by a signal.
5. When finished, the buffer-user notifies end-of-DMA to exporter
Once the DMA for the current buffer-user is over, it signals 'end-of-DMA' to
the exporter using the dma_buf_unmap_attachment API.
Interface:
void dma_buf_unmap_attachment(struct dma_buf_attachment *,
struct sg_table *);
This is a wrapper to dma_buf->ops->unmap_dma_buf() operation, which hides the
"dma_buf->ops->" indirection from the users of this interface.
In struct dma_buf_ops, unmap_dma_buf is defined as
void (*unmap_dma_buf)(struct dma_buf_attachment *,
struct sg_table *,
enum dma_data_direction);
unmap_dma_buf signifies the end-of-DMA for the attachment provided. Like
map_dma_buf, this API also must be implemented by the exporter.
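Steps 4 and 5 combined, as a sketch (the DMA direction is chosen purely for
illustration):

    struct sg_table *sgt;

    sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
    if (IS_ERR(sgt))
        return PTR_ERR(sgt);

    /* program the device with the addresses in sgt and run the DMA */

    dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);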
6. when buffer-user is done using this buffer, it 'disconnects' itself from the
buffer.
After the buffer-user has no more interest in using this buffer, it should
disconnect itself from the buffer:
- it first detaches itself from the buffer.
Interface:
void dma_buf_detach(struct dma_buf *dmabuf,
struct dma_buf_attachment *dmabuf_attach);
This API removes the attachment from the list in dmabuf, and optionally calls
dma_buf->ops->detach(), if provided by exporter, for any housekeeping bits.
- Then, the buffer-user returns the buffer reference to exporter.
Interface:
void dma_buf_put(struct dma_buf *dmabuf);
This API then reduces the refcount for this buffer.
If, as a result of this call, the refcount becomes 0, the 'release' file
operation related to this fd is called. It calls the dmabuf->ops->release()
operation in turn, and frees the memory allocated for dmabuf when exported.
NOTES:
- Importance of attach-detach and {map,unmap}_dma_buf operation pairs
The attach-detach calls allow the exporter to figure out backing-storage
constraints for the currently-interested devices. This allows preferential
allocation, and/or migration of pages across different types of storage
available, if possible.
Bracketing of DMA access with {map,unmap}_dma_buf operations is essential
to allow just-in-time backing of storage, and migration mid-way through a
use-case.
- Migration of backing storage if needed
If after
- at least one map_dma_buf has happened,
- and the backing storage has been allocated for this buffer,
another new buffer-user intends to attach itself to this buffer, it might
be allowed, if possible for the exporter.
In case it is allowed by the exporter:
if the new buffer-user has stricter 'backing-storage constraints', and the
exporter can handle these constraints, the exporter can just stall on the
map_dma_buf until all outstanding access is completed (as signalled by
unmap_dma_buf).
Once all users have finished accessing and have unmapped this buffer, the
exporter could potentially move the buffer to the stricter backing-storage,
and then allow further {map,unmap}_dma_buf operations from any buffer-user
from the migrated backing-storage.
If the exporter cannot fulfill the backing-storage constraints of the new
buffer-user device as requested, dma_buf_attach() would return an error to
denote non-compatibility of the new buffer-sharing request with the current
buffer.
If the exporter chooses not to allow an attach() operation once a
map_dma_buf() API has been called, it simply returns an error.
Kernel cpu access to a dma-buf buffer object
--------------------------------------------
The motivations to allow cpu access from the kernel to a dma-buf object from the
importer's side are:
- fallback operations, e.g. if the device is connected to a usb bus and the
kernel needs to shuffle the data around first before sending it away.
- full transparency for existing users on the importer side, i.e. userspace
should not notice the difference between a normal object from that subsystem
and an imported one backed by a dma-buf. This is really important for drm
opengl drivers that expect to still use all the existing upload/download
paths.
Access to a dma_buf from the kernel context involves three steps:
1. Prepare access, which invalidates any necessary caches and makes the object
available for cpu access.
2. Access the object page-by-page with the dma_buf map apis
3. Finish access, which will flush any necessary cpu caches and free reserved
resources.
1. Prepare access
Before an importer can access a dma_buf object with the cpu from the kernel
context, it needs to notify the exporter of the access that is about to
happen.
Interface:
int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
This allows the exporter to ensure that the memory is actually available for
cpu access - the exporter might need to allocate or swap-in and pin the
backing storage. The exporter also needs to ensure that cpu access is
coherent for the access direction. The direction can be used by the exporter
to optimize the cache flushing, i.e. access with a different direction (read
instead of write) might return stale or even bogus data (e.g. when the
exporter needs to copy the data to temporary storage).
This step might fail, e.g. in oom conditions.
2. Accessing the buffer
To support dma_buf objects residing in highmem, cpu access is page-based using
an api similar to kmap. Accessing a dma_buf is done in aligned chunks of
PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which returns
a pointer in kernel virtual address space. Afterwards the chunk needs to be
unmapped again. There is no limit on how often a given chunk can be mapped
and unmapped, i.e. the importer does not need to call begin_cpu_access again
before mapping the same chunk again.
Interfaces:
void *dma_buf_kmap(struct dma_buf *, unsigned long);
void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
There are also atomic variants of these interfaces. Like for kmap they
facilitate non-blocking fast-paths. Neither the importer nor the exporter (in
the callback) is allowed to block when using these.
Interfaces:
void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
For importers all the restrictions of using kmap apply, like the limited
supply of kmap_atomic slots. Hence an importer shall only hold onto at most 2
atomic dma_buf kmaps at the same time (in any given process context).
dma_buf kmap calls outside of the range specified in begin_cpu_access are
undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
the partial chunks at the beginning and end but may return stale or bogus
data outside of the range (in these partial chunks).
Note that these calls need to always succeed. The exporter needs to complete
any preparations that might fail in begin_cpu_access.
For some cases the overhead of kmap can be too high, so a vmap interface
is introduced. This interface should be used very carefully, as vmalloc
space is a limited resource on many architectures.
Interfaces:
void *dma_buf_vmap(struct dma_buf *dmabuf)
void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
The vmap call can fail if there is no vmap support in the exporter, or if it
runs out of vmalloc space. Fallback to kmap should be implemented. Note that
the dma-buf layer keeps a reference count for all vmap access and calls down
into the exporter's vmap function only when no vmapping exists, and only
unmaps it once. Protection against concurrent vmap/vunmap calls is provided
by taking the dma_buf->lock mutex.
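A sketch of the suggested vmap-with-kmap-fallback pattern (assuming the buffer
reference and the surrounding access bracketing already exist):

    void *vaddr = dma_buf_vmap(dmabuf);

    if (vaddr) {
        /* use the contiguous kernel mapping of the whole buffer */
        dma_buf_vunmap(dmabuf, vaddr);
    } else {
        /* no vmap support, or vmalloc space exhausted: fall back to
           page-wise dma_buf_kmap()/dma_buf_kunmap() access */
    }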
3. Finish access
When the importer is done accessing the buffer with the CPU, it needs to
announce this to the exporter (to facilitate cache flushing and unpinning of
any pinned resources). The result of any dma_buf kmap calls after
end_cpu_access is undefined.
Interface:
void dma_buf_end_cpu_access(struct dma_buf *dma_buf,
enum dma_data_direction dir);
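The three steps combined, as a sketch of a page-wise read from the buffer
("dst" and the page loop are illustrative, not prescribed by this document):

    int i, err;

    err = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
    if (err)
        return err;     /* may fail, e.g. in oom conditions */

    for (i = 0; i < dmabuf->size / PAGE_SIZE; i++) {
        void *vaddr = dma_buf_kmap(dmabuf, i);  /* must not fail */

        memcpy(dst + i * PAGE_SIZE, vaddr, PAGE_SIZE);
        dma_buf_kunmap(dmabuf, i, vaddr);
    }

    dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);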
Direct Userspace Access/mmap Support
------------------------------------
Being able to mmap an exported dma-buf buffer object has 2 main use-cases:
- CPU fallback processing in a pipeline and
- supporting existing mmap interfaces in importers.
1. CPU fallback processing in a pipeline
In many processing pipelines it is sometimes required that the cpu can access
the data in a dma-buf (e.g. for thumbnail creation, snapshots, ...). To avoid
the need to handle this specially in userspace frameworks for buffer sharing
it's ideal if the dma_buf fd itself can be used to access the backing storage
from userspace using mmap.
Furthermore Android's ION framework already supports this (and is otherwise
rather similar to dma-buf from a userspace consumer side with using fds as
handles, too). So it's beneficial to support this in a similar fashion on
dma-buf to have a good transition path for existing Android userspace.
No special interfaces, userspace simply calls mmap on the dma-buf fd, making
sure that the cache synchronization ioctl (DMA_BUF_IOCTL_SYNC) is *always*
used when the access happens. Note that DMA_BUF_IOCTL_SYNC can fail with
-EAGAIN or -EINTR, in which case it must be restarted.
Some systems might need some sort of cache coherency management e.g. when
CPU and GPU domains are being accessed through dma-buf at the same time. To
circumvent this problem there are begin/end coherency markers, that forward
directly to existing dma-buf device drivers vfunc hooks. Userspace can make
use of those markers through the DMA_BUF_IOCTL_SYNC ioctl. The sequence
would be used like following:
- mmap dma-buf fd
- for each drawing/upload cycle in CPU 1. SYNC_START ioctl, 2. read/write
to mmap area 3. SYNC_END ioctl. This can be repeated as often as you
want (with the new data being consumed by the GPU or say scanout device)
- munmap once you don't need the buffer any more
For correctness and optimal performance, it is always required to use
SYNC_START and SYNC_END before and after, respectively, when accessing the
mapped address. Userspace cannot rely on coherent access, even when there
are systems where it just works without calling these ioctls.
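As a userspace sketch of that sequence (error checking and the restart on
-EAGAIN/-EINTR are omitted for brevity):

    #include <linux/dma-buf.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    struct dma_buf_sync sync = { 0 };
    void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                     dmabuf_fd, 0);

    sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW;
    ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

    /* read from or draw into ptr */

    sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
    ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

    munmap(ptr, size);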
2. Supporting existing mmap interfaces in importers
Similar to the motivation for kernel cpu access it is again important that
the userspace code of a given importing subsystem can use the same interfaces
with an imported dma-buf buffer object as with a native buffer object. This is
especially important for drm where the userspace part of contemporary OpenGL,
X, and other drivers is huge, and reworking them to use a different way to
mmap a buffer is rather invasive.
The assumption in the current dma-buf interfaces is that redirecting the
initial mmap is all that's needed. A survey of some of the existing
subsystems shows that no driver seems to do any nefarious thing like syncing
up with outstanding asynchronous processing on the device or allocating
special resources at fault time. So hopefully this is good enough, since
adding interfaces to intercept pagefaults and allow pte shootdowns would
increase the complexity quite a bit.
Interface:
int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
unsigned long);
If the importing subsystem simply provides a special-purpose mmap call to set
up a mapping in userspace, calling do_mmap with dma_buf->file will equally
achieve that for a dma-buf object.
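A hypothetical importer sketch (all "my_*" names are placeholders): the
subsystem's own mmap handler simply redirects imported objects to the
exporter:

    static int my_obj_mmap(struct my_object *obj, struct vm_area_struct *vma)
    {
        if (obj->dmabuf)    /* imported: let the exporter set up the mapping */
            return dma_buf_mmap(obj->dmabuf, vma, 0);

        return my_native_mmap(obj, vma);
    }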
3. Implementation notes for exporters
Because dma-buf buffers have invariant size over their lifetime, the dma-buf
core checks whether a vma is too large and rejects such mappings. The
exporter hence does not need to duplicate this check.
Because existing importing subsystems might presume coherent mappings for
userspace, the exporter needs to set up a coherent mapping. If that's not
possible, it needs to fake coherency by manually shooting down ptes when
leaving the cpu domain and flushing caches at fault time. Note that all the
dma_buf files share the same anon inode, hence the exporter needs to replace
the dma_buf file stored in vma->vm_file with its own if pte shootdown is
required. This is because the kernel uses the underlying inode's address_space
for vma tracking (and hence pte tracking at shootdown time with
unmap_mapping_range).
If the above shootdown dance turns out to be too expensive in certain
scenarios, we can extend dma-buf with a more explicit cache tracking scheme
for userspace mappings. But the current assumption is that using mmap is
always a slower path, so some inefficiencies should be acceptable.
Exporters that shoot down mappings (for any reasons) shall not do any
synchronization at fault time with outstanding device operations.
Synchronization is an orthogonal issue to sharing the backing storage of a
buffer and hence should not be handled by dma-buf itself. This is explicitly
mentioned here because many people seem to want something like this, but if
different exporters handle this differently, buffer sharing can fail in
interesting ways depending upon the exporter (if userspace starts depending
upon this implicit synchronization).
Other Interfaces Exposed to Userspace on the dma-buf FD
------------------------------------------------------
- Since kernel 3.12 the dma-buf FD supports the llseek system call, but only
with offset=0 and whence=SEEK_END|SEEK_SET. SEEK_SET is supported to allow
the usual size discovery pattern size = SEEK_END(0); SEEK_SET(0). Every other
llseek operation will report -EINVAL.
If llseek on dma-buf FDs isn't supported the kernel will report -ESPIPE for all
cases. Userspace can use this to detect support for discovering the dma-buf
size using llseek.
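In userspace, the detection and size discovery then look like this sketch:

    #include <errno.h>
    #include <unistd.h>

    off_t size = lseek(dmabuf_fd, 0, SEEK_END);

    if (size < 0 && errno == ESPIPE) {
        /* running on a kernel without dma-buf llseek support */
    }
    lseek(dmabuf_fd, 0, SEEK_SET);  /* rewind for any subsequent user */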
Miscellaneous notes
-------------------
- Any exporters or users of the dma-buf buffer sharing framework must have
a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.
- In order to avoid fd leaks on exec, the FD_CLOEXEC flag must be set
on the file descriptor. This is not just a resource leak, but a
potential security hole. It could give the newly exec'd application
access to buffers, via the leaked fd, to which it should otherwise
not be permitted access.
The problem with doing this via a separate fcntl() call, versus doing it
atomically when the fd is created, is that this is inherently racy in a
multi-threaded app[3]. The issue is made worse when it is library code
opening/creating the file descriptor, as the application may not even be
aware of the fd's.
To avoid this problem, userspace must have a way to request O_CLOEXEC
flag be set when the dma-buf fd is created. So any API provided by
the exporting driver to create a dmabuf fd must provide a way to let
userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().
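For instance, a driver-specific export ioctl might forward a userspace-supplied
flag into dma_buf_fd() like this sketch (the surrounding ioctl and the "flags"
argument are hypothetical):

    /* "flags" comes from the driver-specific ioctl argument */
    int fd = dma_buf_fd(dmabuf, flags & O_CLOEXEC ? O_CLOEXEC : 0);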
- If an exporter needs to manually flush caches and hence needs to fake
coherency for mmap support, it needs to be able to zap all the ptes pointing
at the backing storage. Now linux mm needs a struct address_space associated
with the struct file stored in vma->vm_file to do that with the function
unmap_mapping_range. But the dma_buf framework only backs every dma_buf fd
with the anon_file struct file, i.e. all dma_bufs share the same file.
Hence exporters need to set up their own file (and address_space) association
by setting vma->vm_file and adjusting vma->vm_pgoff in the dma_buf mmap
callback. In the specific case of a gem driver the exporter could use the
shmem file already provided by gem (and set vm_pgoff = 0). Exporters can then
zap ptes by unmapping the corresponding range of the struct address_space
associated with their own file.
References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h
[3] https://lwn.net/Articles/236486/


@@ -17,6 +17,98 @@ shared or exclusive fence(s) associated with the buffer.
Shared DMA Buffers
------------------
This document serves as a guide to device-driver writers on what the dma-buf
buffer sharing API is and how to use it for exporting and using shared buffers.
Any device driver which wishes to be a part of DMA buffer sharing can do so as
either the 'exporter' of buffers, or the 'user' or 'importer' of buffers.
Say a driver A wants to use buffers created by driver B; then we call B the
exporter, and A the buffer-user/importer.
The exporter
- implements and manages operations in :c:type:`struct dma_buf_ops
<dma_buf_ops>` for the buffer,
- allows other users to share the buffer by using dma_buf sharing APIs,
- manages the details of buffer allocation, wrapped into a :c:type:`struct
  dma_buf <dma_buf>`,
- decides about the actual backing storage where this allocation happens,
- and takes care of any migration of scatterlist - for all (shared) users of
this buffer.
The buffer-user
- is one of (many) sharing users of the buffer.
- doesn't need to worry about how the buffer is allocated, or where.
- and needs a mechanism to get access to the scatterlist that makes up this
buffer in memory, mapped into its own address space, so it can access the
same area of memory. This interface is provided by :c:type:`struct
dma_buf_attachment <dma_buf_attachment>`.
Any exporters or users of the dma-buf buffer sharing framework must have a
'select DMA_SHARED_BUFFER' in their respective Kconfigs.
Userspace Interface Notes
~~~~~~~~~~~~~~~~~~~~~~~~~
Mostly a DMA buffer file descriptor is simply an opaque object for userspace,
and hence the generic interface exposed is very minimal. There are a few
things to consider though:
- Since kernel 3.12 the dma-buf FD supports the llseek system call, but only
  with offset=0 and whence=SEEK_END|SEEK_SET. SEEK_SET is supported to allow
  the usual size discovery pattern size = SEEK_END(0); SEEK_SET(0). Every other
  llseek operation will report -EINVAL.

  If llseek on dma-buf FDs isn't supported the kernel will report -ESPIPE for
  all cases. Userspace can use this to detect support for discovering the
  dma-buf size using llseek.
- In order to avoid fd leaks on exec, the FD_CLOEXEC flag must be set
on the file descriptor. This is not just a resource leak, but a
potential security hole. It could give the newly exec'd application
access to buffers, via the leaked fd, to which it should otherwise
not be permitted access.
The problem with doing this via a separate fcntl() call, versus doing it
atomically when the fd is created, is that this is inherently racy in a
multi-threaded app[3]. The issue is made worse when it is library code
opening/creating the file descriptor, as the application may not even be
aware of the fd's.
To avoid this problem, userspace must have a way to request O_CLOEXEC
flag be set when the dma-buf fd is created. So any API provided by
the exporting driver to create a dmabuf fd must provide a way to let
userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().
- Memory mapping the contents of the DMA buffer is also supported. See the
discussion below on `CPU Access to DMA Buffer Objects`_ for the full details.
- The DMA buffer FD is also pollable, see `Fence Poll Support`_ below for
details.
Basic Operation and Device DMA Access
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :doc: dma buf device access

CPU Access to DMA Buffer Objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :doc: cpu access

Fence Poll Support
~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :doc: fence polling

Kernel Functions and Structures Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :export:


@@ -3965,7 +3965,7 @@ F: drivers/dma-buf/
F: include/linux/dma-buf*
F: include/linux/reservation.h
F: include/linux/*fence.h
-F: Documentation/dma-buf-sharing.txt
+F: Documentation/driver-api/dma-buf.rst
T: git git://anongit.freedesktop.org/drm/drm-misc

SYNC FILE FRAMEWORK


@@ -124,6 +124,28 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
    return base + offset;
}
/**
* DOC: fence polling
*
* To support cross-device and cross-driver synchronization of buffer access
* implicit fences (represented internally in the kernel with struct &fence) can
* be attached to a &dma_buf. The glue for that and a few related things are
* provided in the &reservation_object structure.
*
* Userspace can query the state of these implicitly tracked fences using poll()
* and related system calls:
*
* - Checking for POLLIN, i.e. read access, can be used to query the state of the
*   most recent write or exclusive fence.
*
* - Checking for POLLOUT, i.e. write access, can be used to query the state of
* all attached fences, shared and exclusive ones.
*
* Note that this only signals the completion of the respective fences, i.e. the
* DMA transfers are complete. Cache flushing and any other necessary
* preparations before CPU access can begin still need to happen.
*/
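An illustrative userspace sketch (not part of this diff): a process that wants
to wait until all device access to the buffer has finished can poll for write
access:

    #include <poll.h>

    static void wait_for_idle(int dmabuf_fd)
    {
        struct pollfd pfd = { .fd = dmabuf_fd, .events = POLLOUT };

        poll(&pfd, 1, -1);  /* wakes once all attached fences have signaled */
    }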
static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
    struct dma_buf_poll_cb_t *dcb = (struct dma_buf_poll_cb_t *)cb;
@@ -313,6 +335,37 @@ static inline int is_dma_buf_file(struct file *file)
    return file->f_op == &dma_buf_fops;
}
/**
* DOC: dma buf device access
*
* For device DMA access to a shared DMA buffer the usual sequence of operations
* is fairly simple:
*
* 1. The exporter defines its exporter instance using
* DEFINE_DMA_BUF_EXPORT_INFO() and calls dma_buf_export() to wrap a private
* buffer object into a &dma_buf. It then exports that &dma_buf to userspace
* as a file descriptor by calling dma_buf_fd().
*
* 2. Userspace passes this file descriptor to all drivers it wants this buffer
*    to share with: First the file descriptor is converted to a &dma_buf using
*    dma_buf_get(). Then the buffer is attached to the device using
*    dma_buf_attach().
*
* Up to this stage the exporter is still free to migrate or reallocate the
* backing storage.
*
* 3. Once the buffer is attached to all devices userspace can initiate DMA
* access to the shared buffer. In the kernel this is done by calling
* dma_buf_map_attachment() and dma_buf_unmap_attachment().
*
* 4. Once a driver is done with a shared buffer it needs to call
* dma_buf_detach() (after cleaning up any mappings) and then release the
* reference acquired with dma_buf_get by calling dma_buf_put().
*
* For the detailed semantics exporters are expected to implement see
* &dma_buf_ops.
*/
/**
* dma_buf_export - Creates a new dma_buf, and associates an anon file
* with this buffer, so it can be exported.
@@ -320,13 +373,15 @@ static inline int is_dma_buf_file(struct file *file)
* Additionally, provide a name string for exporter; useful in debugging.
*
* @exp_info: [in] holds all the export related information provided
- * by the exporter. see struct dma_buf_export_info
+ * by the exporter. see struct &dma_buf_export_info
* for further details.
*
* Returns, on success, a newly created dma_buf object, which wraps the
* supplied private data and operations for dma_buf_ops. On either missing
* ops, or error in allocating struct dma_buf, will return negative error.
*
* For most cases the easiest way to create @exp_info is through the
* %DEFINE_DMA_BUF_EXPORT_INFO macro.
*/
struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
{
@@ -458,7 +513,12 @@ EXPORT_SYMBOL_GPL(dma_buf_get);
* dma_buf_put - decreases refcount of the buffer
* @dmabuf: [in] buffer to reduce refcount of
*
- * Uses file's refcounting done implicitly by fput()
+ * Uses file's refcounting done implicitly by fput().
*
* If, as a result of this call, the refcount becomes 0, the 'release' file
* operation related to this fd is called. It calls the release operation of
* struct &dma_buf_ops in turn, and frees the memory allocated for dmabuf when
* exported.
*/
void dma_buf_put(struct dma_buf *dmabuf)
{
@@ -475,8 +535,17 @@ EXPORT_SYMBOL_GPL(dma_buf_put);
* @dmabuf: [in] buffer to attach device to.
* @dev: [in] device to be attached.
*
- * Returns struct dma_buf_attachment * for this attachment; returns ERR_PTR on
- * error.
+ * Returns struct dma_buf_attachment pointer for this attachment. Attachments
+ * must be cleaned up by calling dma_buf_detach().
*
* Returns:
*
* A pointer to newly created &dma_buf_attachment on success, or a negative
* error code wrapped into a pointer on failure.
*
* Note that this can fail if the backing storage of @dmabuf is in a place not
* accessible to @dev, and cannot be moved to a more suitable place. This is
* indicated with the error code -EBUSY.
*/
struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
                                          struct device *dev)
@@ -519,6 +588,7 @@ EXPORT_SYMBOL_GPL(dma_buf_attach);
* @dmabuf: [in] buffer to detach from.
* @attach: [in] attachment to be detached; is free'd after this call.
*
* Clean up a device attachment obtained by calling dma_buf_attach().
*/
void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
{
@@ -543,7 +613,12 @@ EXPORT_SYMBOL_GPL(dma_buf_detach);
* @direction: [in] direction of DMA transfer
*
* Returns sg_table containing the scatterlist to be returned; returns ERR_PTR
- * on error.
+ * on error. May return -EINTR if it is interrupted by a signal.
*
* A mapping must be unmapped again using dma_buf_unmap_attachment(). Note that
* the underlying backing storage is pinned for as long as a mapping exists,
* therefore users/importers should not hold onto a mapping for undue amounts of
* time.
*/
struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
                                        enum dma_data_direction direction)
@@ -571,6 +646,7 @@ EXPORT_SYMBOL_GPL(dma_buf_map_attachment);
* @sg_table: [in] scatterlist info of the buffer to unmap
* @direction: [in] direction of DMA transfer
*
* This unmaps a DMA mapping for @attach obtained by dma_buf_map_attachment().
*/
void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
                              struct sg_table *sg_table,
@@ -586,6 +662,122 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
}
EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
/**
* DOC: cpu access
*
* There are multiple reasons for supporting CPU access to a dma buffer object:
*
* - Fallback operations in the kernel, for example when a device is connected
* over USB and the kernel needs to shuffle the data around first before
*   sending it away. Cache coherency is handled by bracketing any transactions
*   with calls to dma_buf_begin_cpu_access() and dma_buf_end_cpu_access().
*
* To support dma_buf objects residing in highmem cpu access is page-based
* using an api similar to kmap. Accessing a dma_buf is done in aligned chunks
* of PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which
* returns a pointer in kernel virtual address space. Afterwards the chunk
* needs to be unmapped again. There is no limit on how often a given chunk
* can be mapped and unmapped, i.e. the importer does not need to call
* begin_cpu_access again before mapping the same chunk again.
*
* Interfaces::
* void \*dma_buf_kmap(struct dma_buf \*, unsigned long);
* void dma_buf_kunmap(struct dma_buf \*, unsigned long, void \*);
*
* There are also atomic variants of these interfaces. Like for kmap they
* facilitate non-blocking fast-paths. Neither the importer nor the exporter
* (in the callback) is allowed to block when using these.
*
* Interfaces::
* void \*dma_buf_kmap_atomic(struct dma_buf \*, unsigned long);
* void dma_buf_kunmap_atomic(struct dma_buf \*, unsigned long, void \*);
*
* For importers all the restrictions of using kmap apply, like the limited
* supply of kmap_atomic slots. Hence an importer shall only hold onto at
* max 2 atomic dma_buf kmaps at the same time (in any given process context).
*
* dma_buf kmap calls outside of the range specified in begin_cpu_access are
* undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
* the partial chunks at the beginning and end but may return stale or bogus
* data outside of the range (in these partial chunks).
*
* Note that these calls need to always succeed. The exporter needs to
* complete any preparations that might fail in begin_cpu_access.
*
* For some cases the overhead of kmap can be too high, a vmap interface
* is introduced. This interface should be used very carefully, as vmalloc
* space is a limited resources on many architectures.
*
* Interfaces::
* void \*dma_buf_vmap(struct dma_buf \*dmabuf)
* void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr)
*
* The vmap call can fail if there is no vmap support in the exporter, or if
* it runs out of vmalloc space. Fallback to kmap should be implemented. Note
* that the dma-buf layer keeps a reference count for all vmap access and
* calls down into the exporter's vmap function only when no vmapping exists,
* and only unmaps it once. Protection against concurrent vmap/vunmap calls is
* provided by taking the dma_buf->lock mutex.
*
* - For full compatibility on the importer side with existing userspace
* interfaces, which might already support mmap'ing buffers. This is needed in
* many processing pipelines (e.g. feeding a software rendered image into a
* hardware pipeline, thumbnail creation, snapshots, ...). Also, Android's ION
* framework already supported this and for DMA buffer file descriptors to
* replace ION buffers mmap support was needed.
*
* There are no special interfaces, userspace simply calls mmap on the dma-buf
* fd. But like for CPU access there's a need to bracket the actual access,
* which is handled by the ioctl (DMA_BUF_IOCTL_SYNC). Note that
* DMA_BUF_IOCTL_SYNC can fail with -EAGAIN or -EINTR, in which case it must
* be restarted.
*
* Some systems might need some sort of cache coherency management e.g. when
* CPU and GPU domains are being accessed through dma-buf at the same time.
* To circumvent this problem there are begin/end coherency markers, that
* forward directly to existing dma-buf device drivers vfunc hooks. Userspace
* can make use of those markers through the DMA_BUF_IOCTL_SYNC ioctl. The
* sequence would be used like following:
*
* - mmap dma-buf fd
* - for each drawing/upload cycle in CPU 1. SYNC_START ioctl, 2. read/write
* to mmap area 3. SYNC_END ioctl. This can be repeated as often as you
* want (with the new data being consumed by say the GPU or the scanout
* device)
* - munmap once you don't need the buffer any more
*
* For correctness and optimal performance, it is always required to use
* SYNC_START and SYNC_END before and after, respectively, when accessing the
* mapped address. Userspace cannot rely on coherent access, even when there
* are systems where it just works without calling these ioctls.
*
* - And as a CPU fallback in userspace processing pipelines.
*
* Similar to the motivation for kernel cpu access it is again important that
* the userspace code of a given importing subsystem can use the same
* interfaces with an imported dma-buf buffer object as with a native buffer
* object. This is especially important for drm where the userspace part of
* contemporary OpenGL, X, and other drivers is huge, and reworking them to
* use a different way to mmap a buffer is rather invasive.
*
* The assumption in the current dma-buf interfaces is that redirecting the
* initial mmap is all that's needed. A survey of some of the existing
* subsystems shows that no driver seems to do any nefarious thing like
* syncing up with outstanding asynchronous processing on the device or
* allocating special resources at fault time. So hopefully this is good
* enough, since adding interfaces to intercept pagefaults and allow pte
* shootdowns would increase the complexity quite a bit.
*
* Interface::
* int dma_buf_mmap(struct dma_buf \*, struct vm_area_struct \*,
* unsigned long);
*
* If the importing subsystem simply provides a special-purpose mmap call to
* set up a mapping in userspace, calling do_mmap with dma_buf->file will
* equally achieve that for a dma-buf object.
*/
static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
                                      enum dma_data_direction direction)
{
@@ -611,6 +803,10 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
* @dmabuf: [in] buffer to prepare cpu access for.
* @direction: [in] direction of cpu access.
*
* After the cpu access is complete the caller should call
* dma_buf_end_cpu_access(). Only when cpu access is bracketed by both calls is
* it guaranteed to be coherent with other DMA access.
*
* Can return negative error values, returns 0 on success.
*/
int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
@@ -643,6 +839,8 @@ EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
* @dmabuf: [in] buffer to complete cpu access for.
* @direction: [in] direction of cpu access.
*
* This terminates CPU access started with dma_buf_begin_cpu_access().
*
* Can return negative error values, returns 0 on success.
*/
int dma_buf_end_cpu_access(struct dma_buf *dmabuf,


@@ -67,9 +67,10 @@ static void fence_check_cb_func(struct dma_fence *f, struct dma_fence_cb *cb)
* sync_file_create() - creates a sync file
* @fence: fence to add to the sync_fence
*
- * Creates a sync_file containg @fence. Once this is called, the sync_file
- * takes ownership of @fence. The sync_file can be released with
- * fput(sync_file->file). Returns the sync_file or NULL in case of error.
+ * Creates a sync_file containing @fence. This function acquires an additional
+ * reference of @fence for the newly-created &sync_file, if it succeeds. The
+ * sync_file can be released with fput(sync_file->file). Returns the
+ * sync_file or NULL in case of error.
*/
struct sync_file *sync_file_create(struct dma_fence *fence)
{
@@ -90,13 +91,6 @@ struct sync_file *sync_file_create(struct dma_fence *fence)
}
EXPORT_SYMBOL(sync_file_create);

-/**
-* sync_file_fdget() - get a sync_file from an fd
-* @fd: fd referencing a fence
-*
-* Ensures @fd references a valid sync_file, increments the refcount of the
-* backing file. Returns the sync_file or NULL in case of error.
-*/
static struct sync_file *sync_file_fdget(int fd)
{
    struct file *file = fget(fd);
@@ -468,4 +462,3 @@ static const struct file_operations sync_file_fops = {
    .unlocked_ioctl = sync_file_ioctl,
    .compat_ioctl = sync_file_ioctl,
};


@@ -48,6 +48,21 @@ config DRM_DEBUG_MM
      If in doubt, say "N".
config DRM_DEBUG_MM_SELFTEST
    tristate "kselftests for DRM range manager (struct drm_mm)"
    depends on DRM
    depends on DEBUG_KERNEL
    select PRIME_NUMBERS
    select DRM_LIB_RANDOM
    default n
    help
      This option provides a kernel module that can be used to test
      the DRM range manager (drm_mm) and its API. This option is not
      useful for distributions or general kernels, but only for kernel
      developers working on DRM and associated drivers.

      If in doubt, say "N".
config DRM_KMS_HELPER
    tristate
    depends on DRM
@@ -321,3 +336,7 @@ config DRM_SAVAGE
      chipset. If M is selected the module will be called savage.

endif # DRM_LEGACY
config DRM_LIB_RANDOM
    bool
    default n


@@ -18,6 +18,7 @@ drm-y := drm_auth.o drm_bufs.o drm_cache.o \
    drm_plane.o drm_color_mgmt.o drm_print.o \
    drm_dumb_buffers.o drm_mode_config.o

drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
drm-$(CONFIG_PCI) += ati_pcigart.o
@@ -37,6 +38,7 @@ drm_kms_helper-$(CONFIG_DRM_KMS_CMA_HELPER) += drm_fb_cma_helper.o
drm_kms_helper-$(CONFIG_DRM_DP_AUX_CHARDEV) += drm_dp_aux_dev.o

obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o
obj-$(CONFIG_DRM_DEBUG_MM_SELFTEST) += selftests/

CFLAGS_drm_trace_points.o := -I$(src)


@@ -508,7 +508,7 @@ amdgpu_framebuffer_init(struct drm_device *dev,
{
    int ret;

    rfb->obj = obj;
-   drm_helper_mode_fill_fb_struct(&rfb->base, mode_cmd);
+   drm_helper_mode_fill_fb_struct(dev, &rfb->base, mode_cmd);
    ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
    if (ret) {
        rfb->obj = NULL;


@@ -245,7 +245,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
    strcpy(info->fix.id, "amdgpudrmfb");

-   drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
+   drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);

    info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
    info->fbops = &amdgpufb_ops;
@@ -272,7 +272,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
    DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start);
    DRM_INFO("vram apper at 0x%lX\n", (unsigned long)adev->mc.aper_base);
    DRM_INFO("size %lu\n", (unsigned long)amdgpu_bo_size(abo));
-   DRM_INFO("fb depth is %d\n", fb->depth);
+   DRM_INFO("fb depth is %d\n", fb->format->depth);
    DRM_INFO("   pitch is %d\n", fb->pitches[0]);

    vga_switcheroo_client_fb_set(adev->ddev->pdev, info);


@@ -61,10 +61,8 @@ static void amdgpu_hotplug_work_func(struct work_struct *work)
    struct drm_connector *connector;

    mutex_lock(&mode_config->mutex);
-   if (mode_config->num_connector) {
-       list_for_each_entry(connector, &mode_config->connector_list, head)
-           amdgpu_connector_hotplug(connector);
-   }
+   list_for_each_entry(connector, &mode_config->connector_list, head)
+       amdgpu_connector_hotplug(connector);
    mutex_unlock(&mode_config->mutex);
    /* Just fire off a uevent and let userspace tell us what to do */
    drm_helper_hpd_irq_event(dev);


@@ -32,6 +32,7 @@
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/drm_encoder.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_fixed.h>
#include <drm/drm_crtc_helper.h>


@@ -2072,7 +2072,7 @@ static int dce_v10_0_crtc_do_set_base(struct drm_crtc *crtc,
    pipe_config = AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG);

-   switch (target_fb->pixel_format) {
+   switch (target_fb->format->format) {
    case DRM_FORMAT_C8:
        fb_format = REG_SET_FIELD(0, GRPH_CONTROL, GRPH_DEPTH, 0);
        fb_format = REG_SET_FIELD(fb_format, GRPH_CONTROL, GRPH_FORMAT, 0);
@@ -2145,7 +2145,7 @@ static int dce_v10_0_crtc_do_set_base(struct drm_crtc *crtc,
        break;
    default:
        DRM_ERROR("Unsupported screen format %s\n",
-                 drm_get_format_name(target_fb->pixel_format, &format_name));
+                 drm_get_format_name(target_fb->format->format, &format_name));
        return -EINVAL;
    }
@@ -2220,7 +2220,7 @@ static int dce_v10_0_crtc_do_set_base(struct drm_crtc *crtc,
    WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width);
    WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height);

-   fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
+   fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
    WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels);

    dce_v10_0_grph_enable(crtc, true);


@@ -2053,7 +2053,7 @@ static int dce_v11_0_crtc_do_set_base(struct drm_crtc *crtc,
    pipe_config = AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG);

-   switch (target_fb->pixel_format) {
+   switch (target_fb->format->format) {
    case DRM_FORMAT_C8:
        fb_format = REG_SET_FIELD(0, GRPH_CONTROL, GRPH_DEPTH, 0);
        fb_format = REG_SET_FIELD(fb_format, GRPH_CONTROL, GRPH_FORMAT, 0);
@@ -2126,7 +2126,7 @@ static int dce_v11_0_crtc_do_set_base(struct drm_crtc *crtc,
        break;
    default:
        DRM_ERROR("Unsupported screen format %s\n",
-                 drm_get_format_name(target_fb->pixel_format, &format_name));
+                 drm_get_format_name(target_fb->format->format, &format_name));
        return -EINVAL;
    }
@@ -2201,7 +2201,7 @@ static int dce_v11_0_crtc_do_set_base(struct drm_crtc *crtc,
    WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width);
    WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height);

-   fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
+   fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
    WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels);

    dce_v11_0_grph_enable(crtc, true);


@@ -1501,7 +1501,7 @@ static int dce_v6_0_crtc_do_set_base(struct drm_crtc *crtc,
    amdgpu_bo_get_tiling_flags(abo, &tiling_flags);
    amdgpu_bo_unreserve(abo);

-   switch (target_fb->pixel_format) {
+   switch (target_fb->format->format) {
    case DRM_FORMAT_C8:
        fb_format = (GRPH_DEPTH(GRPH_DEPTH_8BPP) |
                     GRPH_FORMAT(GRPH_FORMAT_INDEXED));
@@ -1567,7 +1567,7 @@ static int dce_v6_0_crtc_do_set_base(struct drm_crtc *crtc,
        break;
    default:
        DRM_ERROR("Unsupported screen format %s\n",
-                 drm_get_format_name(target_fb->pixel_format, &format_name));
+                 drm_get_format_name(target_fb->format->format, &format_name));
        return -EINVAL;
    }
@@ -1630,7 +1630,7 @@ static int dce_v6_0_crtc_do_set_base(struct drm_crtc *crtc,
    WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width);
    WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height);

-   fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
+   fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
    WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels);

    dce_v6_0_grph_enable(crtc, true);


@@ -1950,7 +1950,7 @@ static int dce_v8_0_crtc_do_set_base(struct drm_crtc *crtc,
    pipe_config = AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG);

-   switch (target_fb->pixel_format) {
+   switch (target_fb->format->format) {
    case DRM_FORMAT_C8:
        fb_format = ((GRPH_DEPTH_8BPP << GRPH_CONTROL__GRPH_DEPTH__SHIFT) |
                     (GRPH_FORMAT_INDEXED << GRPH_CONTROL__GRPH_FORMAT__SHIFT));
@@ -2016,7 +2016,7 @@ static int dce_v8_0_crtc_do_set_base(struct drm_crtc *crtc,
        break;
    default:
        DRM_ERROR("Unsupported screen format %s\n",
-                 drm_get_format_name(target_fb->pixel_format, &format_name));
+                 drm_get_format_name(target_fb->format->format, &format_name));
        return -EINVAL;
    }
@@ -2079,7 +2079,7 @@ static int dce_v8_0_crtc_do_set_base(struct drm_crtc *crtc,
    WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width);
    WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height);

-   fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
+   fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
    WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels);

    dce_v8_0_grph_enable(crtc, true);


@@ -35,7 +35,8 @@ static struct simplefb_format supported_formats[] = {
static void arc_pgu_set_pxl_fmt(struct drm_crtc *crtc)
{
    struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
-   uint32_t pixel_format = crtc->primary->state->fb->pixel_format;
+   const struct drm_framebuffer *fb = crtc->primary->state->fb;
+   uint32_t pixel_format = fb->format->format;
    struct simplefb_format *format = NULL;
    int i;

@@ -47,10 +47,7 @@ int arcpgu_drm_hdmi_init(struct drm_device *drm, struct device_node *np)
        return ret;

    /* Link drm_bridge to encoder */
-   bridge->encoder = encoder;
-   encoder->bridge = bridge;
-
-   ret = drm_bridge_attach(drm, bridge);
+   ret = drm_bridge_attach(encoder, bridge, NULL);
    if (ret)
        drm_encoder_cleanup(encoder);


@@ -60,11 +60,12 @@ static int hdlcd_set_pxl_fmt(struct drm_crtc *crtc)
{
    unsigned int btpp;
    struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
    const struct drm_framebuffer *fb = crtc->primary->state->fb;
    uint32_t pixel_format;
    struct simplefb_format *format = NULL;
    int i;

-   pixel_format = crtc->primary->state->fb->pixel_format;
+   pixel_format = fb->format->format;

    for (i = 0; i < ARRAY_SIZE(supported_formats); i++) {
        if (supported_formats[i].fourcc == pixel_format)
@ -220,27 +221,28 @@ static int hdlcd_plane_atomic_check(struct drm_plane *plane,
static void hdlcd_plane_atomic_update(struct drm_plane *plane, static void hdlcd_plane_atomic_update(struct drm_plane *plane,
struct drm_plane_state *state) struct drm_plane_state *state)
{ {
struct drm_framebuffer *fb = plane->state->fb;
struct hdlcd_drm_private *hdlcd; struct hdlcd_drm_private *hdlcd;
struct drm_gem_cma_object *gem; struct drm_gem_cma_object *gem;
u32 src_w, src_h, dest_w, dest_h; u32 src_w, src_h, dest_w, dest_h;
dma_addr_t scanout_start; dma_addr_t scanout_start;
if (!plane->state->fb) if (!fb)
return; return;
src_w = plane->state->src_w >> 16; src_w = plane->state->src_w >> 16;
src_h = plane->state->src_h >> 16; src_h = plane->state->src_h >> 16;
dest_w = plane->state->crtc_w; dest_w = plane->state->crtc_w;
dest_h = plane->state->crtc_h; dest_h = plane->state->crtc_h;
gem = drm_fb_cma_get_gem_obj(plane->state->fb, 0); gem = drm_fb_cma_get_gem_obj(fb, 0);
scanout_start = gem->paddr + plane->state->fb->offsets[0] + scanout_start = gem->paddr + fb->offsets[0] +
plane->state->crtc_y * plane->state->fb->pitches[0] + plane->state->crtc_y * fb->pitches[0] +
plane->state->crtc_x * plane->state->crtc_x *
drm_format_plane_cpp(plane->state->fb->pixel_format, 0); fb->format->cpp[0];
hdlcd = plane->dev->dev_private; hdlcd = plane->dev->dev_private;
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, plane->state->fb->pitches[0]); hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, fb->pitches[0]);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_PITCH, plane->state->fb->pitches[0]); hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_PITCH, fb->pitches[0]);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_COUNT, dest_h - 1); hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_COUNT, dest_h - 1);
hdlcd_write(hdlcd, HDLCD_REG_FB_BASE, scanout_start); hdlcd_write(hdlcd, HDLCD_REG_FB_BASE, scanout_start);
} }
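
The scanout address above is plain linear-framebuffer arithmetic: base + offsets[0] + y * pitch + x * bytes-per-pixel, with the last factor now taken from fb->format->cpp[0]. A standalone sketch with assumed plane-state values:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: same arithmetic as hdlcd_plane_atomic_update(). */
int main(void)
{
	uint64_t paddr = 0x80000000;	/* gem->paddr */
	uint32_t offset0 = 0;		/* fb->offsets[0] */
	uint32_t pitch = 1920 * 4;	/* fb->pitches[0] */
	uint32_t cpp = 4;		/* fb->format->cpp[0] */
	uint32_t crtc_x = 8, crtc_y = 2;

	uint64_t scanout_start = paddr + offset0 +
				 crtc_y * pitch + crtc_x * cpp;

	printf("scanout_start = 0x%llx\n",
	       (unsigned long long)scanout_start);	/* 0x80003c20 */
	return 0;
}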

View File

@ -112,11 +112,11 @@ static int malidp_de_plane_check(struct drm_plane *plane,
fb = state->fb; fb = state->fb;
ms->format = malidp_hw_get_format_id(&mp->hwdev->map, mp->layer->id, ms->format = malidp_hw_get_format_id(&mp->hwdev->map, mp->layer->id,
fb->pixel_format); fb->format->format);
if (ms->format == MALIDP_INVALID_FORMAT_ID) if (ms->format == MALIDP_INVALID_FORMAT_ID)
return -EINVAL; return -EINVAL;
ms->n_planes = drm_format_num_planes(fb->pixel_format); ms->n_planes = fb->format->num_planes;
for (i = 0; i < ms->n_planes; i++) { for (i = 0; i < ms->n_planes; i++) {
if (!malidp_hw_pitch_valid(mp->hwdev, fb->pitches[i])) { if (!malidp_hw_pitch_valid(mp->hwdev, fb->pitches[i])) {
DRM_DEBUG_KMS("Invalid pitch %u for plane %d\n", DRM_DEBUG_KMS("Invalid pitch %u for plane %d\n",
@ -137,8 +137,8 @@ static int malidp_de_plane_check(struct drm_plane *plane,
/* packed RGB888 / BGR888 can't be rotated or flipped */ /* packed RGB888 / BGR888 can't be rotated or flipped */
if (state->rotation != DRM_ROTATE_0 && if (state->rotation != DRM_ROTATE_0 &&
(state->fb->pixel_format == DRM_FORMAT_RGB888 || (fb->format->format == DRM_FORMAT_RGB888 ||
state->fb->pixel_format == DRM_FORMAT_BGR888)) fb->format->format == DRM_FORMAT_BGR888))
return -EINVAL; return -EINVAL;
ms->rotmem_size = 0; ms->rotmem_size = 0;
@ -147,7 +147,7 @@ static int malidp_de_plane_check(struct drm_plane *plane,
val = mp->hwdev->rotmem_required(mp->hwdev, state->crtc_h, val = mp->hwdev->rotmem_required(mp->hwdev, state->crtc_h,
state->crtc_w, state->crtc_w,
state->fb->pixel_format); fb->format->format);
if (val < 0) if (val < 0)
return val; return val;

View File

@ -169,8 +169,7 @@ void armada_drm_plane_calc_addrs(u32 *addrs, struct drm_framebuffer *fb,
int x, int y) int x, int y)
{ {
u32 addr = drm_fb_obj(fb)->dev_addr; u32 addr = drm_fb_obj(fb)->dev_addr;
u32 pixel_format = fb->pixel_format; int num_planes = fb->format->num_planes;
int num_planes = drm_format_num_planes(pixel_format);
int i; int i;
if (num_planes > 3) if (num_planes > 3)
@ -178,7 +177,7 @@ void armada_drm_plane_calc_addrs(u32 *addrs, struct drm_framebuffer *fb,
for (i = 0; i < num_planes; i++) for (i = 0; i < num_planes; i++)
addrs[i] = addr + fb->offsets[i] + y * fb->pitches[i] + addrs[i] = addr + fb->offsets[i] + y * fb->pitches[i] +
x * drm_format_plane_cpp(pixel_format, i); x * fb->format->cpp[i];
for (; i < 3; i++) for (; i < 3; i++)
addrs[i] = 0; addrs[i] = 0;
} }
@ -191,7 +190,7 @@ static unsigned armada_drm_crtc_calc_fb(struct drm_framebuffer *fb,
unsigned i = 0; unsigned i = 0;
DRM_DEBUG_DRIVER("pitch %u x %d y %d bpp %d\n", DRM_DEBUG_DRIVER("pitch %u x %d y %d bpp %d\n",
pitch, x, y, fb->bits_per_pixel); pitch, x, y, fb->format->cpp[0] * 8);
armada_drm_plane_calc_addrs(addrs, fb, x, y); armada_drm_plane_calc_addrs(addrs, fb, x, y);
@ -1036,7 +1035,7 @@ static int armada_drm_crtc_page_flip(struct drm_crtc *crtc,
int ret; int ret;
/* We don't support changing the pixel format */ /* We don't support changing the pixel format */
if (fb->pixel_format != crtc->primary->fb->pixel_format) if (fb->format != crtc->primary->fb->format)
return -EINVAL; return -EINVAL;
work = kmalloc(sizeof(*work), GFP_KERNEL); work = kmalloc(sizeof(*work), GFP_KERNEL);
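
Comparing fb->format pointers here, rather than fourcc codes, works because framebuffers store a pointer into the drm core's canonical format-info table, so two framebuffers with the same fourcc share one descriptor. A hedged kernel-style sketch:

#include <drm/drm_fourcc.h>

/* Illustrative only: both lookups return the same pointer into the
 * static format table, so pointer equality implies format equality. */
const struct drm_format_info *a = drm_format_info(DRM_FORMAT_XRGB8888);
const struct drm_format_info *b = drm_format_info(DRM_FORMAT_XRGB8888);
WARN_ON(a != b);	/* never fires */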

View File

@ -81,7 +81,7 @@ struct armada_framebuffer *armada_framebuffer_create(struct drm_device *dev,
dfb->mod = config; dfb->mod = config;
dfb->obj = obj; dfb->obj = obj;
drm_helper_mode_fill_fb_struct(&dfb->fb, mode); drm_helper_mode_fill_fb_struct(dev, &dfb->fb, mode);
ret = drm_framebuffer_init(dev, &dfb->fb, &armada_fb_funcs); ret = drm_framebuffer_init(dev, &dfb->fb, &armada_fb_funcs);
if (ret) { if (ret) {

View File

@ -89,11 +89,12 @@ static int armada_fb_create(struct drm_fb_helper *fbh,
info->screen_base = ptr; info->screen_base = ptr;
fbh->fb = &dfb->fb; fbh->fb = &dfb->fb;
drm_fb_helper_fill_fix(info, dfb->fb.pitches[0], dfb->fb.depth); drm_fb_helper_fill_fix(info, dfb->fb.pitches[0],
dfb->fb.format->depth);
drm_fb_helper_fill_var(info, fbh, sizes->fb_width, sizes->fb_height); drm_fb_helper_fill_var(info, fbh, sizes->fb_width, sizes->fb_height);
DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08llx\n", DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08llx\n",
dfb->fb.width, dfb->fb.height, dfb->fb.bits_per_pixel, dfb->fb.width, dfb->fb.height, dfb->fb.format->cpp[0] * 8,
(unsigned long long)obj->phys_addr); (unsigned long long)obj->phys_addr);
return 0; return 0;

View File

@ -186,9 +186,9 @@ armada_ovl_plane_update(struct drm_plane *plane, struct drm_crtc *crtc,
armada_drm_plane_calc_addrs(addrs, fb, src_x, src_y); armada_drm_plane_calc_addrs(addrs, fb, src_x, src_y);
pixel_format = fb->pixel_format; pixel_format = fb->format->format;
hsub = drm_format_horz_chroma_subsampling(pixel_format); hsub = drm_format_horz_chroma_subsampling(pixel_format);
num_planes = drm_format_num_planes(pixel_format); num_planes = fb->format->num_planes;
/* /*
* Annoyingly, shifting a YUYV-format image by one pixel * Annoyingly, shifting a YUYV-format image by one pixel

View File

@ -28,6 +28,7 @@
#ifndef __AST_DRV_H__ #ifndef __AST_DRV_H__
#define __AST_DRV_H__ #define __AST_DRV_H__
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h> #include <drm/drm_fb_helper.h>
#include <drm/ttm/ttm_bo_api.h> #include <drm/ttm/ttm_bo_api.h>

View File

@ -49,7 +49,7 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
struct drm_gem_object *obj; struct drm_gem_object *obj;
struct ast_bo *bo; struct ast_bo *bo;
int src_offset, dst_offset; int src_offset, dst_offset;
int bpp = (afbdev->afb.base.bits_per_pixel + 7)/8; int bpp = afbdev->afb.base.format->cpp[0];
int ret = -EBUSY; int ret = -EBUSY;
bool unmap = false; bool unmap = false;
bool store_for_later = false; bool store_for_later = false;
@ -237,7 +237,7 @@ static int astfb_create(struct drm_fb_helper *helper,
info->apertures->ranges[0].base = pci_resource_start(dev->pdev, 0); info->apertures->ranges[0].base = pci_resource_start(dev->pdev, 0);
info->apertures->ranges[0].size = pci_resource_len(dev->pdev, 0); info->apertures->ranges[0].size = pci_resource_len(dev->pdev, 0);
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(info, &afbdev->helper, sizes->fb_width, sizes->fb_height); drm_fb_helper_fill_var(info, &afbdev->helper, sizes->fb_width, sizes->fb_height);
info->screen_base = sysram; info->screen_base = sysram;

View File

@ -314,7 +314,7 @@ int ast_framebuffer_init(struct drm_device *dev,
{ {
int ret; int ret;
drm_helper_mode_fill_fb_struct(&ast_fb->base, mode_cmd); drm_helper_mode_fill_fb_struct(dev, &ast_fb->base, mode_cmd);
ast_fb->obj = obj; ast_fb->obj = obj;
ret = drm_framebuffer_init(dev, &ast_fb->base, &ast_fb_funcs); ret = drm_framebuffer_init(dev, &ast_fb->base, &ast_fb_funcs);
if (ret) { if (ret) {

View File

@ -79,12 +79,13 @@ static bool ast_get_vbios_mode_info(struct drm_crtc *crtc, struct drm_display_mo
struct ast_vbios_mode_info *vbios_mode) struct ast_vbios_mode_info *vbios_mode)
{ {
struct ast_private *ast = crtc->dev->dev_private; struct ast_private *ast = crtc->dev->dev_private;
const struct drm_framebuffer *fb = crtc->primary->fb;
u32 refresh_rate_index = 0, mode_id, color_index, refresh_rate; u32 refresh_rate_index = 0, mode_id, color_index, refresh_rate;
u32 hborder, vborder; u32 hborder, vborder;
bool check_sync; bool check_sync;
struct ast_vbios_enhtable *best = NULL; struct ast_vbios_enhtable *best = NULL;
switch (crtc->primary->fb->bits_per_pixel) { switch (fb->format->cpp[0] * 8) {
case 8: case 8:
vbios_mode->std_table = &vbios_stdtable[VGAModeIndex]; vbios_mode->std_table = &vbios_stdtable[VGAModeIndex];
color_index = VGAModeIndex - 1; color_index = VGAModeIndex - 1;
@ -207,7 +208,8 @@ static bool ast_get_vbios_mode_info(struct drm_crtc *crtc, struct drm_display_mo
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x91, 0x00); ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x91, 0x00);
if (vbios_mode->enh_table->flags & NewModeInfo) { if (vbios_mode->enh_table->flags & NewModeInfo) {
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x91, 0xa8); ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x91, 0xa8);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x92, crtc->primary->fb->bits_per_pixel); ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x92,
fb->format->cpp[0] * 8);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x93, adjusted_mode->clock / 1000); ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x93, adjusted_mode->clock / 1000);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x94, adjusted_mode->crtc_hdisplay); ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x94, adjusted_mode->crtc_hdisplay);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x95, adjusted_mode->crtc_hdisplay >> 8); ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x95, adjusted_mode->crtc_hdisplay >> 8);
@ -369,10 +371,11 @@ static void ast_set_crtc_reg(struct drm_crtc *crtc, struct drm_display_mode *mod
static void ast_set_offset_reg(struct drm_crtc *crtc) static void ast_set_offset_reg(struct drm_crtc *crtc)
{ {
struct ast_private *ast = crtc->dev->dev_private; struct ast_private *ast = crtc->dev->dev_private;
const struct drm_framebuffer *fb = crtc->primary->fb;
u16 offset; u16 offset;
offset = crtc->primary->fb->pitches[0] >> 3; offset = fb->pitches[0] >> 3;
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x13, (offset & 0xff)); ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x13, (offset & 0xff));
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xb0, (offset >> 8) & 0x3f); ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xb0, (offset >> 8) & 0x3f);
} }
@ -395,9 +398,10 @@ static void ast_set_ext_reg(struct drm_crtc *crtc, struct drm_display_mode *mode
struct ast_vbios_mode_info *vbios_mode) struct ast_vbios_mode_info *vbios_mode)
{ {
struct ast_private *ast = crtc->dev->dev_private; struct ast_private *ast = crtc->dev->dev_private;
const struct drm_framebuffer *fb = crtc->primary->fb;
u8 jregA0 = 0, jregA3 = 0, jregA8 = 0; u8 jregA0 = 0, jregA3 = 0, jregA8 = 0;
switch (crtc->primary->fb->bits_per_pixel) { switch (fb->format->cpp[0] * 8) {
case 8: case 8:
jregA0 = 0x70; jregA0 = 0x70;
jregA3 = 0x01; jregA3 = 0x01;
@ -452,7 +456,9 @@ static void ast_set_sync_reg(struct drm_device *dev, struct drm_display_mode *mo
static bool ast_set_dac_reg(struct drm_crtc *crtc, struct drm_display_mode *mode, static bool ast_set_dac_reg(struct drm_crtc *crtc, struct drm_display_mode *mode,
struct ast_vbios_mode_info *vbios_mode) struct ast_vbios_mode_info *vbios_mode)
{ {
switch (crtc->primary->fb->bits_per_pixel) { const struct drm_framebuffer *fb = crtc->primary->fb;
switch (fb->format->cpp[0] * 8) {
case 8: case 8:
break; break;
default: default:

View File

@ -446,7 +446,7 @@ void atmel_hlcdc_layer_update_set_fb(struct atmel_hlcdc_layer *layer,
return; return;
if (fb) if (fb)
nplanes = drm_format_num_planes(fb->pixel_format); nplanes = fb->format->num_planes;
if (nplanes > layer->max_planes) if (nplanes > layer->max_planes)
return; return;

View File

@ -230,9 +230,7 @@ static int atmel_hlcdc_attach_endpoint(struct drm_device *dev,
of_node_put(np); of_node_put(np);
if (bridge) { if (bridge) {
output->encoder.bridge = bridge;
bridge->encoder = &output->encoder;
ret = drm_bridge_attach(dev, bridge); ret = drm_bridge_attach(&output->encoder, bridge, NULL);
if (!ret) if (!ret)
return 0; return 0;
} }

View File

@ -356,7 +356,7 @@ atmel_hlcdc_plane_update_general_settings(struct atmel_hlcdc_plane *plane,
cfg |= ATMEL_HLCDC_LAYER_OVR | ATMEL_HLCDC_LAYER_ITER2BL | cfg |= ATMEL_HLCDC_LAYER_OVR | ATMEL_HLCDC_LAYER_ITER2BL |
ATMEL_HLCDC_LAYER_ITER; ATMEL_HLCDC_LAYER_ITER;
if (atmel_hlcdc_format_embeds_alpha(state->base.fb->pixel_format)) if (atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format))
cfg |= ATMEL_HLCDC_LAYER_LAEN; cfg |= ATMEL_HLCDC_LAYER_LAEN;
else else
cfg |= ATMEL_HLCDC_LAYER_GAEN | cfg |= ATMEL_HLCDC_LAYER_GAEN |
@ -386,13 +386,13 @@ static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
u32 cfg; u32 cfg;
int ret; int ret;
ret = atmel_hlcdc_format_to_plane_mode(state->base.fb->pixel_format, ret = atmel_hlcdc_format_to_plane_mode(state->base.fb->format->format,
&cfg); &cfg);
if (ret) if (ret)
return; return;
if ((state->base.fb->pixel_format == DRM_FORMAT_YUV422 || if ((state->base.fb->format->format == DRM_FORMAT_YUV422 ||
state->base.fb->pixel_format == DRM_FORMAT_NV61) && state->base.fb->format->format == DRM_FORMAT_NV61) &&
drm_rotation_90_or_270(state->base.rotation)) drm_rotation_90_or_270(state->base.rotation))
cfg |= ATMEL_HLCDC_YUV422ROT; cfg |= ATMEL_HLCDC_YUV422ROT;
@ -405,7 +405,7 @@ static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
* Rotation optimization is not working on RGB888 (rotation is still * Rotation optimization is not working on RGB888 (rotation is still
* working but without any optimization). * working but without any optimization).
*/ */
if (state->base.fb->pixel_format == DRM_FORMAT_RGB888) if (state->base.fb->format->format == DRM_FORMAT_RGB888)
cfg = ATMEL_HLCDC_LAYER_DMA_ROTDIS; cfg = ATMEL_HLCDC_LAYER_DMA_ROTDIS;
else else
cfg = 0; cfg = 0;
@ -514,7 +514,7 @@ atmel_hlcdc_plane_prepare_disc_area(struct drm_crtc_state *c_state)
ovl_state = drm_plane_state_to_atmel_hlcdc_plane_state(ovl_s); ovl_state = drm_plane_state_to_atmel_hlcdc_plane_state(ovl_s);
if (!ovl_s->fb || if (!ovl_s->fb ||
atmel_hlcdc_format_embeds_alpha(ovl_s->fb->pixel_format) || atmel_hlcdc_format_embeds_alpha(ovl_s->fb->format->format) ||
ovl_state->alpha != 255) ovl_state->alpha != 255)
continue; continue;
@ -621,7 +621,7 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
state->src_w >>= 16; state->src_w >>= 16;
state->src_h >>= 16; state->src_h >>= 16;
state->nplanes = drm_format_num_planes(fb->pixel_format); state->nplanes = fb->format->num_planes;
if (state->nplanes > ATMEL_HLCDC_MAX_PLANES) if (state->nplanes > ATMEL_HLCDC_MAX_PLANES)
return -EINVAL; return -EINVAL;
@ -664,15 +664,15 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
patched_src_h = DIV_ROUND_CLOSEST(patched_crtc_h * state->src_h, patched_src_h = DIV_ROUND_CLOSEST(patched_crtc_h * state->src_h,
state->crtc_h); state->crtc_h);
hsub = drm_format_horz_chroma_subsampling(fb->pixel_format); hsub = drm_format_horz_chroma_subsampling(fb->format->format);
vsub = drm_format_vert_chroma_subsampling(fb->pixel_format); vsub = drm_format_vert_chroma_subsampling(fb->format->format);
for (i = 0; i < state->nplanes; i++) { for (i = 0; i < state->nplanes; i++) {
unsigned int offset = 0; unsigned int offset = 0;
int xdiv = i ? hsub : 1; int xdiv = i ? hsub : 1;
int ydiv = i ? vsub : 1; int ydiv = i ? vsub : 1;
state->bpp[i] = drm_format_plane_cpp(fb->pixel_format, i); state->bpp[i] = fb->format->cpp[i];
if (!state->bpp[i]) if (!state->bpp[i])
return -EINVAL; return -EINVAL;
@ -741,7 +741,7 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
if ((state->crtc_h != state->src_h || state->crtc_w != state->src_w) && if ((state->crtc_h != state->src_h || state->crtc_w != state->src_w) &&
(!layout->memsize || (!layout->memsize ||
atmel_hlcdc_format_embeds_alpha(state->base.fb->pixel_format))) atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format)))
return -EINVAL; return -EINVAL;
if (state->crtc_x < 0 || state->crtc_y < 0) if (state->crtc_x < 0 || state->crtc_y < 0)
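
For multi-planar YUV the per-plane offset loop in atmel_hlcdc_plane_atomic_check() above divides x by hsub and y by vsub on the chroma planes only; plane 0 is never subsampled. A standalone sketch for NV12 (hsub = vsub = 2, cpp = {1, 2}), with assumed source coordinates:

#include <stdio.h>

int main(void)
{
	unsigned int hsub = 2, vsub = 2;	/* NV12 chroma subsampling */
	unsigned int cpp[2] = { 1, 2 };		/* Y plane, interleaved CbCr */
	unsigned int pitches[2] = { 1920, 1920 };
	unsigned int src_x = 16, src_y = 8;

	for (int i = 0; i < 2; i++) {
		unsigned int xdiv = i ? hsub : 1;
		unsigned int ydiv = i ? vsub : 1;
		unsigned int offset = (src_x / xdiv) * cpp[i] +
				      (src_y / ydiv) * pitches[i];
		printf("plane %d: offset %u bytes\n", i, offset);
	}
	return 0;	/* plane 0: 15376, plane 1: 7696 */
}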

View File

@ -4,6 +4,7 @@
#include <drm/drmP.h> #include <drm/drmP.h>
#include <drm/drm_crtc.h> #include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h> #include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h> #include <drm/drm_fb_helper.h>
#include <drm/drm_gem.h> #include <drm/drm_gem.h>

View File

@ -123,7 +123,7 @@ static int bochsfb_create(struct drm_fb_helper *helper,
info->flags = FBINFO_DEFAULT; info->flags = FBINFO_DEFAULT;
info->fbops = &bochsfb_ops; info->fbops = &bochsfb_ops;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(info, &bochs->fb.helper, sizes->fb_width, drm_fb_helper_fill_var(info, &bochs->fb.helper, sizes->fb_width,
sizes->fb_height); sizes->fb_height);

View File

@ -484,7 +484,7 @@ int bochs_framebuffer_init(struct drm_device *dev,
{ {
int ret; int ret;
drm_helper_mode_fill_fb_struct(&gfb->base, mode_cmd); drm_helper_mode_fill_fb_struct(dev, &gfb->base, mode_cmd);
gfb->obj = obj; gfb->obj = obj;
ret = drm_framebuffer_init(dev, &gfb->base, &bochs_fb_funcs); ret = drm_framebuffer_init(dev, &gfb->base, &bochs_fb_funcs);
if (ret) { if (ret) {

View File

@ -133,6 +133,7 @@ int analogix_dp_disable_psr(struct device *dev)
{ {
struct analogix_dp_device *dp = dev_get_drvdata(dev); struct analogix_dp_device *dp = dev_get_drvdata(dev);
struct edp_vsc_psr psr_vsc; struct edp_vsc_psr psr_vsc;
int ret;
if (!dp->psr_support) if (!dp->psr_support)
return 0; return 0;
@ -147,6 +148,10 @@ int analogix_dp_disable_psr(struct device *dev)
psr_vsc.DB0 = 0; psr_vsc.DB0 = 0;
psr_vsc.DB1 = 0; psr_vsc.DB1 = 0;
ret = drm_dp_dpcd_writeb(&dp->aux, DP_SET_POWER, DP_SET_POWER_D0);
if (ret != 1)
dev_err(dp->dev, "Failed to set DP Power0 %d\n", ret);
analogix_dp_send_psr_spd(dp, &psr_vsc); analogix_dp_send_psr_spd(dp, &psr_vsc);
return 0; return 0;
} }
@ -1227,12 +1232,10 @@ static int analogix_dp_create_bridge(struct drm_device *drm_dev,
dp->bridge = bridge; dp->bridge = bridge;
dp->encoder->bridge = bridge;
bridge->driver_private = dp; bridge->driver_private = dp;
bridge->encoder = dp->encoder;
bridge->funcs = &analogix_dp_bridge_funcs; bridge->funcs = &analogix_dp_bridge_funcs;
ret = drm_bridge_attach(drm_dev, bridge); ret = drm_bridge_attach(dp->encoder, bridge, NULL);
if (ret) { if (ret) {
DRM_ERROR("failed to attach drm bridge\n"); DRM_ERROR("failed to attach drm bridge\n");
return -EINVAL; return -EINVAL;
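
In the PSR-disable hunk above, drm_dp_dpcd_writeb() returns the number of bytes transferred (1 on success) or a negative errno, which is why anything other than 1 is reported as a failure. A hedged sketch of the pattern:

/* Illustrative only: single-byte DPCD write with the usual check. */
ssize_t ret = drm_dp_dpcd_writeb(&dp->aux, DP_SET_POWER, DP_SET_POWER_D0);
if (ret != 1)	/* < 0: aux transfer error; 0: nothing written */
	dev_err(dp->dev, "Failed to set DP Power0 %zd\n", ret);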

View File

@ -237,6 +237,7 @@ static int dumb_vga_remove(struct platform_device *pdev)
static const struct of_device_id dumb_vga_match[] = { static const struct of_device_id dumb_vga_match[] = {
{ .compatible = "dumb-vga-dac" }, { .compatible = "dumb-vga-dac" },
{ .compatible = "ti,ths8135" },
{}, {},
}; };
MODULE_DEVICE_TABLE(of, dumb_vga_match); MODULE_DEVICE_TABLE(of, dumb_vga_match);

View File

@ -1841,13 +1841,12 @@ static int dw_hdmi_register(struct drm_device *drm, struct dw_hdmi *hdmi)
hdmi->bridge = bridge; hdmi->bridge = bridge;
bridge->driver_private = hdmi; bridge->driver_private = hdmi;
bridge->funcs = &dw_hdmi_bridge_funcs; bridge->funcs = &dw_hdmi_bridge_funcs;
ret = drm_bridge_attach(drm, bridge); ret = drm_bridge_attach(encoder, bridge, NULL);
if (ret) { if (ret) {
DRM_ERROR("Failed to initialize bridge with drm\n"); DRM_ERROR("Failed to initialize bridge with drm\n");
return -EINVAL; return -EINVAL;
} }
encoder->bridge = bridge;
hdmi->connector.polled = DRM_CONNECTOR_POLL_HPD; hdmi->connector.polled = DRM_CONNECTOR_POLL_HPD;
drm_connector_helper_add(&hdmi->connector, drm_connector_helper_add(&hdmi->connector,

View File

@ -13,6 +13,7 @@
#include <video/vga.h> #include <video/vga.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h> #include <drm/drm_fb_helper.h>
#include <drm/ttm/ttm_bo_api.h> #include <drm/ttm/ttm_bo_api.h>

View File

@ -22,7 +22,7 @@ static void cirrus_dirty_update(struct cirrus_fbdev *afbdev,
struct drm_gem_object *obj; struct drm_gem_object *obj;
struct cirrus_bo *bo; struct cirrus_bo *bo;
int src_offset, dst_offset; int src_offset, dst_offset;
int bpp = (afbdev->gfb.base.bits_per_pixel + 7)/8; int bpp = afbdev->gfb.base.format->cpp[0];
int ret = -EBUSY; int ret = -EBUSY;
bool unmap = false; bool unmap = false;
bool store_for_later = false; bool store_for_later = false;
@ -218,7 +218,7 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
info->flags = FBINFO_DEFAULT; info->flags = FBINFO_DEFAULT;
info->fbops = &cirrusfb_ops; info->fbops = &cirrusfb_ops;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(info, &gfbdev->helper, sizes->fb_width, drm_fb_helper_fill_var(info, &gfbdev->helper, sizes->fb_width,
sizes->fb_height); sizes->fb_height);
@ -238,7 +238,7 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start); DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start);
DRM_INFO("vram aper at 0x%lX\n", (unsigned long)info->fix.smem_start); DRM_INFO("vram aper at 0x%lX\n", (unsigned long)info->fix.smem_start);
DRM_INFO("size %lu\n", (unsigned long)info->fix.smem_len); DRM_INFO("size %lu\n", (unsigned long)info->fix.smem_len);
DRM_INFO("fb depth is %d\n", fb->depth); DRM_INFO("fb depth is %d\n", fb->format->depth);
DRM_INFO(" pitch is %d\n", fb->pitches[0]); DRM_INFO(" pitch is %d\n", fb->pitches[0]);
return 0; return 0;

View File

@ -34,7 +34,7 @@ int cirrus_framebuffer_init(struct drm_device *dev,
{ {
int ret; int ret;
drm_helper_mode_fill_fb_struct(&gfb->base, mode_cmd); drm_helper_mode_fill_fb_struct(dev, &gfb->base, mode_cmd);
gfb->obj = obj; gfb->obj = obj;
ret = drm_framebuffer_init(dev, &gfb->base, &cirrus_fb_funcs); ret = drm_framebuffer_init(dev, &gfb->base, &cirrus_fb_funcs);
if (ret) { if (ret) {

View File

@ -185,6 +185,7 @@ static int cirrus_crtc_mode_set(struct drm_crtc *crtc,
{ {
struct drm_device *dev = crtc->dev; struct drm_device *dev = crtc->dev;
struct cirrus_device *cdev = dev->dev_private; struct cirrus_device *cdev = dev->dev_private;
const struct drm_framebuffer *fb = crtc->primary->fb;
int hsyncstart, hsyncend, htotal, hdispend; int hsyncstart, hsyncend, htotal, hdispend;
int vtotal, vdispend; int vtotal, vdispend;
int tmp; int tmp;
@ -257,7 +258,7 @@ static int cirrus_crtc_mode_set(struct drm_crtc *crtc,
sr07 = RREG8(SEQ_DATA); sr07 = RREG8(SEQ_DATA);
sr07 &= 0xe0; sr07 &= 0xe0;
hdr = 0; hdr = 0;
switch (crtc->primary->fb->bits_per_pixel) { switch (fb->format->cpp[0] * 8) {
case 8: case 8:
sr07 |= 0x11; sr07 |= 0x11;
break; break;
@ -280,13 +281,13 @@ static int cirrus_crtc_mode_set(struct drm_crtc *crtc,
WREG_SEQ(0x7, sr07); WREG_SEQ(0x7, sr07);
/* Program the pitch */ /* Program the pitch */
tmp = crtc->primary->fb->pitches[0] / 8; tmp = fb->pitches[0] / 8;
WREG_CRT(VGA_CRTC_OFFSET, tmp); WREG_CRT(VGA_CRTC_OFFSET, tmp);
/* Enable extended blanking and pitch bits, and enable full memory */ /* Enable extended blanking and pitch bits, and enable full memory */
tmp = 0x22; tmp = 0x22;
tmp |= (crtc->primary->fb->pitches[0] >> 7) & 0x10; tmp |= (fb->pitches[0] >> 7) & 0x10;
tmp |= (crtc->primary->fb->pitches[0] >> 6) & 0x40; tmp |= (fb->pitches[0] >> 6) & 0x40;
WREG_CRT(0x1b, tmp); WREG_CRT(0x1b, tmp);
/* Enable high-colour modes */ /* Enable high-colour modes */
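
The pitch programming above splits the byte pitch across two registers: VGA_CRTC_OFFSET takes the low byte of pitch/8, while bits 4 and 6 of extension register 0x1b carry pitch bits 11 and 12 (hence the >> 7 and >> 6 shifts under the 0x10 and 0x40 masks). A standalone worked example for an assumed 1024x768, 32 bpp mode:

#include <stdio.h>

/* Illustrative only: reproduces the cirrus pitch split. */
int main(void)
{
	unsigned int pitch = 1024 * 4;			/* fb->pitches[0] */

	unsigned char crtc_offset = (pitch / 8) & 0xff;	/* VGA_CRTC_OFFSET */

	unsigned char ext = 0x22;	/* extended blanking + full memory */
	ext |= (pitch >> 7) & 0x10;	/* pitch bit 11 -> bit 4 */
	ext |= (pitch >> 6) & 0x40;	/* pitch bit 12 -> bit 6 */

	printf("VGA_CRTC_OFFSET = 0x%02x, CR1B = 0x%02x\n",
	       crtc_offset, ext);	/* 0x00, 0x62 */
	return 0;
}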

View File

@ -902,11 +902,11 @@ static int drm_atomic_plane_check(struct drm_plane *plane,
} }
/* Check whether this plane supports the fb pixel format. */ /* Check whether this plane supports the fb pixel format. */
ret = drm_plane_check_pixel_format(plane, state->fb->pixel_format); ret = drm_plane_check_pixel_format(plane, state->fb->format->format);
if (ret) { if (ret) {
struct drm_format_name_buf format_name; struct drm_format_name_buf format_name;
DRM_DEBUG_ATOMIC("Invalid pixel format %s\n", DRM_DEBUG_ATOMIC("Invalid pixel format %s\n",
drm_get_format_name(state->fb->pixel_format, drm_get_format_name(state->fb->format->format,
&format_name)); &format_name));
return ret; return ret;
} }
@ -960,11 +960,11 @@ static void drm_atomic_plane_print_state(struct drm_printer *p,
drm_printf(p, "\tfb=%u\n", state->fb ? state->fb->base.id : 0); drm_printf(p, "\tfb=%u\n", state->fb ? state->fb->base.id : 0);
if (state->fb) { if (state->fb) {
struct drm_framebuffer *fb = state->fb; struct drm_framebuffer *fb = state->fb;
int i, n = drm_format_num_planes(fb->pixel_format); int i, n = fb->format->num_planes;
struct drm_format_name_buf format_name; struct drm_format_name_buf format_name;
drm_printf(p, "\t\tformat=%s\n", drm_printf(p, "\t\tformat=%s\n",
drm_get_format_name(fb->pixel_format, &format_name)); drm_get_format_name(fb->format->format, &format_name));
drm_printf(p, "\t\t\tmodifier=0x%llx\n", fb->modifier); drm_printf(p, "\t\t\tmodifier=0x%llx\n", fb->modifier);
drm_printf(p, "\t\tsize=%dx%d\n", fb->width, fb->height); drm_printf(p, "\t\tsize=%dx%d\n", fb->width, fb->height);
drm_printf(p, "\t\tlayers:\n"); drm_printf(p, "\t\tlayers:\n");
@ -1417,6 +1417,7 @@ drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
struct drm_mode_config *config = &state->dev->mode_config; struct drm_mode_config *config = &state->dev->mode_config;
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_state *conn_state; struct drm_connector_state *conn_state;
struct drm_connector_list_iter conn_iter;
int ret; int ret;
ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx); ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx);
@ -1430,14 +1431,18 @@ drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
* Changed connectors are already in @state, so only need to look at the * Changed connectors are already in @state, so only need to look at the
* current configuration. * current configuration.
*/ */
drm_for_each_connector(connector, state->dev) { drm_connector_list_iter_get(state->dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->state->crtc != crtc) if (connector->state->crtc != crtc)
continue; continue;
conn_state = drm_atomic_get_connector_state(state, connector); conn_state = drm_atomic_get_connector_state(state, connector);
if (IS_ERR(conn_state)) if (IS_ERR(conn_state)) {
drm_connector_list_iter_put(&conn_iter);
return PTR_ERR(conn_state); return PTR_ERR(conn_state);
}
} }
drm_connector_list_iter_put(&conn_iter);
return 0; return 0;
} }
@ -1692,6 +1697,7 @@ void drm_state_dump(struct drm_device *dev, struct drm_printer *p)
struct drm_plane *plane; struct drm_plane *plane;
struct drm_crtc *crtc; struct drm_crtc *crtc;
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
if (!drm_core_check_feature(dev, DRIVER_ATOMIC)) if (!drm_core_check_feature(dev, DRIVER_ATOMIC))
return; return;
@ -1702,8 +1708,10 @@ void drm_state_dump(struct drm_device *dev, struct drm_printer *p)
list_for_each_entry(crtc, &config->crtc_list, head) list_for_each_entry(crtc, &config->crtc_list, head)
drm_atomic_crtc_print_state(p, crtc->state); drm_atomic_crtc_print_state(p, crtc->state);
list_for_each_entry(connector, &config->connector_list, head) drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
drm_atomic_connector_print_state(p, connector->state); drm_atomic_connector_print_state(p, connector->state);
drm_connector_list_iter_put(&conn_iter);
} }
EXPORT_SYMBOL(drm_state_dump); EXPORT_SYMBOL(drm_state_dump);
@ -2195,10 +2203,6 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
goto out; goto out;
if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY) { if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY) {
/*
* Unlike commit, check_only does not clean up state.
* Below we call drm_atomic_state_put for it.
*/
ret = drm_atomic_check_only(state); ret = drm_atomic_check_only(state);
} else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) { } else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) {
ret = drm_atomic_nonblocking_commit(state); ret = drm_atomic_nonblocking_commit(state);

View File

@ -94,9 +94,10 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
{ {
struct drm_connector_state *conn_state; struct drm_connector_state *conn_state;
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_encoder *encoder; struct drm_encoder *encoder;
unsigned encoder_mask = 0; unsigned encoder_mask = 0;
int i, ret; int i, ret = 0;
/* /*
* First loop, find all newly assigned encoders from the connectors * First loop, find all newly assigned encoders from the connectors
@ -144,7 +145,8 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
* and the crtc is disabled if no encoder is left. This preserves * and the crtc is disabled if no encoder is left. This preserves
* compatibility with the legacy set_config behavior. * compatibility with the legacy set_config behavior.
*/ */
drm_for_each_connector(connector, state->dev) { drm_connector_list_iter_get(state->dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
struct drm_crtc_state *crtc_state; struct drm_crtc_state *crtc_state;
if (drm_atomic_get_existing_connector_state(state, connector)) if (drm_atomic_get_existing_connector_state(state, connector))
@ -160,12 +162,15 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
connector->state->crtc->base.id, connector->state->crtc->base.id,
connector->state->crtc->name, connector->state->crtc->name,
connector->base.id, connector->name); connector->base.id, connector->name);
return -EINVAL; ret = -EINVAL;
goto out;
} }
conn_state = drm_atomic_get_connector_state(state, connector); conn_state = drm_atomic_get_connector_state(state, connector);
if (IS_ERR(conn_state)) if (IS_ERR(conn_state)) {
return PTR_ERR(conn_state); ret = PTR_ERR(conn_state);
goto out;
}
DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] in use on [CRTC:%d:%s], disabling [CONNECTOR:%d:%s]\n", DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] in use on [CRTC:%d:%s], disabling [CONNECTOR:%d:%s]\n",
encoder->base.id, encoder->name, encoder->base.id, encoder->name,
@ -176,19 +181,21 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
ret = drm_atomic_set_crtc_for_connector(conn_state, NULL); ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
if (ret) if (ret)
return ret; goto out;
if (!crtc_state->connector_mask) { if (!crtc_state->connector_mask) {
ret = drm_atomic_set_mode_prop_for_crtc(crtc_state, ret = drm_atomic_set_mode_prop_for_crtc(crtc_state,
NULL); NULL);
if (ret < 0) if (ret < 0)
return ret; goto out;
crtc_state->active = false; crtc_state->active = false;
} }
} }
out:
drm_connector_list_iter_put(&conn_iter);
return 0; return ret;
} }
static void static void
@ -1057,41 +1064,6 @@ int drm_atomic_helper_wait_for_fences(struct drm_device *dev,
} }
EXPORT_SYMBOL(drm_atomic_helper_wait_for_fences); EXPORT_SYMBOL(drm_atomic_helper_wait_for_fences);
/**
* drm_atomic_helper_framebuffer_changed - check if framebuffer has changed
* @dev: DRM device
* @old_state: atomic state object with old state structures
* @crtc: DRM crtc
*
* Checks whether the framebuffer used for this CRTC changes as a result of
* the atomic update. This is useful for drivers which cannot use
* drm_atomic_helper_wait_for_vblanks() and need to reimplement its
* functionality.
*
* Returns:
* true if the framebuffer changed.
*/
bool drm_atomic_helper_framebuffer_changed(struct drm_device *dev,
struct drm_atomic_state *old_state,
struct drm_crtc *crtc)
{
struct drm_plane *plane;
struct drm_plane_state *old_plane_state;
int i;
for_each_plane_in_state(old_state, plane, old_plane_state, i) {
if (plane->state->crtc != crtc &&
old_plane_state->crtc != crtc)
continue;
if (plane->state->fb != old_plane_state->fb)
return true;
}
return false;
}
EXPORT_SYMBOL(drm_atomic_helper_framebuffer_changed);
/** /**
* drm_atomic_helper_wait_for_vblanks - wait for vblank on crtcs * drm_atomic_helper_wait_for_vblanks - wait for vblank on crtcs
* @dev: DRM device * @dev: DRM device
@ -1110,39 +1082,35 @@ drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
struct drm_crtc *crtc; struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state; struct drm_crtc_state *old_crtc_state;
int i, ret; int i, ret;
unsigned crtc_mask = 0;
/*
* Legacy cursor ioctls are completely unsynced, and userspace
* relies on that (by doing tons of cursor updates).
*/
if (old_state->legacy_cursor_update)
return;
for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) { for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
/* No one cares about the old state, so abuse it for tracking struct drm_crtc_state *new_crtc_state = crtc->state;
* and store whether we hold a vblank reference (and should do a
* vblank wait) in the ->enable boolean. */
old_crtc_state->enable = false;
if (!crtc->state->enable) if (!new_crtc_state->active || !new_crtc_state->planes_changed)
continue;
/* Legacy cursor ioctls are completely unsynced, and userspace
* relies on that (by doing tons of cursor updates). */
if (old_state->legacy_cursor_update)
continue;
if (!drm_atomic_helper_framebuffer_changed(dev,
old_state, crtc))
continue; continue;
ret = drm_crtc_vblank_get(crtc); ret = drm_crtc_vblank_get(crtc);
if (ret != 0) if (ret != 0)
continue; continue;
old_crtc_state->enable = true; crtc_mask |= drm_crtc_mask(crtc);
old_crtc_state->last_vblank_count = drm_crtc_vblank_count(crtc); old_state->crtcs[i].last_vblank_count = drm_crtc_vblank_count(crtc);
} }
for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) { for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
if (!old_crtc_state->enable) if (!(crtc_mask & drm_crtc_mask(crtc)))
continue; continue;
ret = wait_event_timeout(dev->vblank[i].queue, ret = wait_event_timeout(dev->vblank[i].queue,
old_crtc_state->last_vblank_count != old_state->crtcs[i].last_vblank_count !=
drm_crtc_vblank_count(crtc), drm_crtc_vblank_count(crtc),
msecs_to_jiffies(50)); msecs_to_jiffies(50));
@ -1664,9 +1632,6 @@ int drm_atomic_helper_prepare_planes(struct drm_device *dev,
funcs = plane->helper_private; funcs = plane->helper_private;
if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
continue;
if (funcs->prepare_fb) { if (funcs->prepare_fb) {
ret = funcs->prepare_fb(plane, plane_state); ret = funcs->prepare_fb(plane, plane_state);
if (ret) if (ret)
@ -1683,9 +1648,6 @@ int drm_atomic_helper_prepare_planes(struct drm_device *dev,
if (j >= i) if (j >= i)
continue; continue;
if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
continue;
funcs = plane->helper_private; funcs = plane->helper_private;
if (funcs->cleanup_fb) if (funcs->cleanup_fb)
@ -1952,9 +1914,6 @@ void drm_atomic_helper_cleanup_planes(struct drm_device *dev,
for_each_plane_in_state(old_state, plane, plane_state, i) { for_each_plane_in_state(old_state, plane, plane_state, i) {
const struct drm_plane_helper_funcs *funcs; const struct drm_plane_helper_funcs *funcs;
if (!drm_atomic_helper_framebuffer_changed(dev, old_state, plane_state->crtc))
continue;
funcs = plane->helper_private; funcs = plane->helper_private;
if (funcs->cleanup_fb) if (funcs->cleanup_fb)
@ -2442,6 +2401,7 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
{ {
struct drm_atomic_state *state; struct drm_atomic_state *state;
struct drm_connector *conn; struct drm_connector *conn;
struct drm_connector_list_iter conn_iter;
int err; int err;
state = drm_atomic_state_alloc(dev); state = drm_atomic_state_alloc(dev);
@ -2450,7 +2410,8 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
state->acquire_ctx = ctx; state->acquire_ctx = ctx;
drm_for_each_connector(conn, dev) { drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(conn, &conn_iter) {
struct drm_crtc *crtc = conn->state->crtc; struct drm_crtc *crtc = conn->state->crtc;
struct drm_crtc_state *crtc_state; struct drm_crtc_state *crtc_state;
@ -2468,6 +2429,7 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
err = drm_atomic_commit(state); err = drm_atomic_commit(state);
free: free:
drm_connector_list_iter_put(&conn_iter);
drm_atomic_state_put(state); drm_atomic_state_put(state);
return err; return err;
} }
@ -2840,6 +2802,7 @@ int drm_atomic_helper_connector_dpms(struct drm_connector *connector,
struct drm_crtc_state *crtc_state; struct drm_crtc_state *crtc_state;
struct drm_crtc *crtc; struct drm_crtc *crtc;
struct drm_connector *tmp_connector; struct drm_connector *tmp_connector;
struct drm_connector_list_iter conn_iter;
int ret; int ret;
bool active = false; bool active = false;
int old_mode = connector->dpms; int old_mode = connector->dpms;
@ -2867,7 +2830,8 @@ int drm_atomic_helper_connector_dpms(struct drm_connector *connector,
WARN_ON(!drm_modeset_is_locked(&config->connection_mutex)); WARN_ON(!drm_modeset_is_locked(&config->connection_mutex));
drm_for_each_connector(tmp_connector, connector->dev) { drm_connector_list_iter_get(connector->dev, &conn_iter);
drm_for_each_connector_iter(tmp_connector, &conn_iter) {
if (tmp_connector->state->crtc != crtc) if (tmp_connector->state->crtc != crtc)
continue; continue;
@ -2876,6 +2840,7 @@ int drm_atomic_helper_connector_dpms(struct drm_connector *connector,
break; break;
} }
} }
drm_connector_list_iter_put(&conn_iter);
crtc_state->active = active; crtc_state->active = active;
ret = drm_atomic_commit(state); ret = drm_atomic_commit(state);
@ -3253,6 +3218,7 @@ drm_atomic_helper_duplicate_state(struct drm_device *dev,
{ {
struct drm_atomic_state *state; struct drm_atomic_state *state;
struct drm_connector *conn; struct drm_connector *conn;
struct drm_connector_list_iter conn_iter;
struct drm_plane *plane; struct drm_plane *plane;
struct drm_crtc *crtc; struct drm_crtc *crtc;
int err = 0; int err = 0;
@ -3283,15 +3249,18 @@ drm_atomic_helper_duplicate_state(struct drm_device *dev,
} }
} }
drm_for_each_connector(conn, dev) { drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(conn, &conn_iter) {
struct drm_connector_state *conn_state; struct drm_connector_state *conn_state;
conn_state = drm_atomic_get_connector_state(state, conn); conn_state = drm_atomic_get_connector_state(state, conn);
if (IS_ERR(conn_state)) { if (IS_ERR(conn_state)) {
err = PTR_ERR(conn_state); err = PTR_ERR(conn_state);
drm_connector_list_iter_put(&conn_iter);
goto free; goto free;
} }
} }
drm_connector_list_iter_put(&conn_iter);
/* clear the acquire context so that it isn't accidentally reused */ /* clear the acquire context so that it isn't accidentally reused */
state->acquire_ctx = NULL; state->acquire_ctx = NULL;

View File

@ -26,6 +26,9 @@
#include <linux/mutex.h> #include <linux/mutex.h>
#include <drm/drm_bridge.h> #include <drm/drm_bridge.h>
#include <drm/drm_encoder.h>
#include "drm_crtc_internal.h"
/** /**
* DOC: overview * DOC: overview
@ -92,47 +95,58 @@ void drm_bridge_remove(struct drm_bridge *bridge)
EXPORT_SYMBOL(drm_bridge_remove); EXPORT_SYMBOL(drm_bridge_remove);
/** /**
* drm_bridge_attach - associate given bridge to our DRM device * drm_bridge_attach - attach the bridge to an encoder's chain
* *
* @dev: DRM device * @encoder: DRM encoder
* @bridge: bridge control structure * @bridge: bridge to attach
* @previous: previous bridge in the chain (optional)
* *
* Called by a kms driver to link one of our encoder/bridge to the given * Called by a kms driver to link the bridge to an encoder's chain. The previous
* bridge. * argument specifies the previous bridge in the chain. If NULL, the bridge is
* linked directly at the encoder's output. Otherwise it is linked at the
* previous bridge's output.
* *
* Note that setting up links between the bridge and our encoder/bridge * If non-NULL the previous bridge must be already attached by a call to this
* objects needs to be handled by the kms driver itself. * function.
* *
* RETURNS: * RETURNS:
* Zero on success, error code on failure * Zero on success, error code on failure
*/ */
int drm_bridge_attach(struct drm_device *dev, struct drm_bridge *bridge) int drm_bridge_attach(struct drm_encoder *encoder, struct drm_bridge *bridge,
struct drm_bridge *previous)
{ {
if (!dev || !bridge) int ret;
if (!encoder || !bridge)
return -EINVAL;
if (previous && (!previous->dev || previous->encoder != encoder))
return -EINVAL; return -EINVAL;
if (bridge->dev) if (bridge->dev)
return -EBUSY; return -EBUSY;
bridge->dev = dev; bridge->dev = encoder->dev;
bridge->encoder = encoder;
if (bridge->funcs->attach) if (bridge->funcs->attach) {
return bridge->funcs->attach(bridge); ret = bridge->funcs->attach(bridge);
if (ret < 0) {
bridge->dev = NULL;
bridge->encoder = NULL;
return ret;
}
}
if (previous)
previous->next = bridge;
else
encoder->bridge = bridge;
return 0; return 0;
} }
EXPORT_SYMBOL(drm_bridge_attach); EXPORT_SYMBOL(drm_bridge_attach);
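
A hedged usage sketch of the new previous argument, chaining two bridges behind one encoder (bridge_a and bridge_b are hypothetical, found via the usual lookup such as of_drm_find_bridge()):

int ret;

/* First bridge sits directly at the encoder's output ... */
ret = drm_bridge_attach(encoder, bridge_a, NULL);
if (ret)
	return ret;

/* ... and the second is linked at bridge_a's output. Per the check
 * above, the previous bridge must already be attached to the same
 * encoder; the helper maintains the chain pointers itself. */
ret = drm_bridge_attach(encoder, bridge_b, bridge_a);
if (ret)
	return ret;
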
/**
* drm_bridge_detach - deassociate given bridge from its DRM device
*
* @bridge: bridge control structure
*
* Called by a kms driver to unlink the given bridge from its DRM device.
*
* Note that tearing down links between the bridge and our encoder/bridge
* objects needs to be handled by the kms driver itself.
*/
void drm_bridge_detach(struct drm_bridge *bridge) void drm_bridge_detach(struct drm_bridge *bridge)
{ {
if (WARN_ON(!bridge)) if (WARN_ON(!bridge))
@ -146,7 +160,6 @@ void drm_bridge_detach(struct drm_bridge *bridge)
bridge->dev = NULL; bridge->dev = NULL;
} }
EXPORT_SYMBOL(drm_bridge_detach);
/** /**
* DOC: bridge callbacks * DOC: bridge callbacks

View File

@ -23,6 +23,7 @@
#include <drm/drmP.h> #include <drm/drmP.h>
#include <drm/drm_connector.h> #include <drm/drm_connector.h>
#include <drm/drm_edid.h> #include <drm/drm_edid.h>
#include <drm/drm_encoder.h>
#include "drm_crtc_internal.h" #include "drm_crtc_internal.h"
#include "drm_internal.h" #include "drm_internal.h"
@ -189,13 +190,11 @@ int drm_connector_init(struct drm_device *dev,
struct ida *connector_ida = struct ida *connector_ida =
&drm_connector_enum_list[connector_type].ida; &drm_connector_enum_list[connector_type].ida;
drm_modeset_lock_all(dev);
ret = drm_mode_object_get_reg(dev, &connector->base, ret = drm_mode_object_get_reg(dev, &connector->base,
DRM_MODE_OBJECT_CONNECTOR, DRM_MODE_OBJECT_CONNECTOR,
false, drm_connector_free); false, drm_connector_free);
if (ret) if (ret)
goto out_unlock; return ret;
connector->base.properties = &connector->properties; connector->base.properties = &connector->properties;
connector->dev = dev; connector->dev = dev;
@ -225,6 +224,7 @@ int drm_connector_init(struct drm_device *dev,
INIT_LIST_HEAD(&connector->probed_modes); INIT_LIST_HEAD(&connector->probed_modes);
INIT_LIST_HEAD(&connector->modes); INIT_LIST_HEAD(&connector->modes);
mutex_init(&connector->mutex);
connector->edid_blob_ptr = NULL; connector->edid_blob_ptr = NULL;
connector->status = connector_status_unknown; connector->status = connector_status_unknown;
@ -232,8 +232,10 @@ int drm_connector_init(struct drm_device *dev,
/* We should add connectors at the end to avoid upsetting the connector /* We should add connectors at the end to avoid upsetting the connector
* index too much. */ * index too much. */
spin_lock_irq(&config->connector_list_lock);
list_add_tail(&connector->head, &config->connector_list); list_add_tail(&connector->head, &config->connector_list);
config->num_connector++; config->num_connector++;
spin_unlock_irq(&config->connector_list_lock);
if (connector_type != DRM_MODE_CONNECTOR_VIRTUAL) if (connector_type != DRM_MODE_CONNECTOR_VIRTUAL)
drm_object_attach_property(&connector->base, drm_object_attach_property(&connector->base,
@ -258,9 +260,6 @@ int drm_connector_init(struct drm_device *dev,
if (ret) if (ret)
drm_mode_object_unregister(dev, &connector->base); drm_mode_object_unregister(dev, &connector->base);
out_unlock:
drm_modeset_unlock_all(dev);
return ret; return ret;
} }
EXPORT_SYMBOL(drm_connector_init); EXPORT_SYMBOL(drm_connector_init);
@ -351,14 +350,18 @@ void drm_connector_cleanup(struct drm_connector *connector)
drm_mode_object_unregister(dev, &connector->base); drm_mode_object_unregister(dev, &connector->base);
kfree(connector->name); kfree(connector->name);
connector->name = NULL; connector->name = NULL;
spin_lock_irq(&dev->mode_config.connector_list_lock);
list_del(&connector->head); list_del(&connector->head);
dev->mode_config.num_connector--; dev->mode_config.num_connector--;
spin_unlock_irq(&dev->mode_config.connector_list_lock);
WARN_ON(connector->state && !connector->funcs->atomic_destroy_state); WARN_ON(connector->state && !connector->funcs->atomic_destroy_state);
if (connector->state && connector->funcs->atomic_destroy_state) if (connector->state && connector->funcs->atomic_destroy_state)
connector->funcs->atomic_destroy_state(connector, connector->funcs->atomic_destroy_state(connector,
connector->state); connector->state);
mutex_destroy(&connector->mutex);
memset(connector, 0, sizeof(*connector)); memset(connector, 0, sizeof(*connector));
} }
EXPORT_SYMBOL(drm_connector_cleanup); EXPORT_SYMBOL(drm_connector_cleanup);
@ -374,14 +377,15 @@ EXPORT_SYMBOL(drm_connector_cleanup);
*/ */
int drm_connector_register(struct drm_connector *connector) int drm_connector_register(struct drm_connector *connector)
{ {
int ret; int ret = 0;
mutex_lock(&connector->mutex);
if (connector->registered) if (connector->registered)
return 0; goto unlock;
ret = drm_sysfs_connector_add(connector); ret = drm_sysfs_connector_add(connector);
if (ret) if (ret)
return ret; goto unlock;
ret = drm_debugfs_connector_add(connector); ret = drm_debugfs_connector_add(connector);
if (ret) { if (ret) {
@ -397,12 +401,14 @@ int drm_connector_register(struct drm_connector *connector)
drm_mode_object_register(connector->dev, &connector->base); drm_mode_object_register(connector->dev, &connector->base);
connector->registered = true; connector->registered = true;
return 0; goto unlock;
err_debugfs: err_debugfs:
drm_debugfs_connector_remove(connector); drm_debugfs_connector_remove(connector);
err_sysfs: err_sysfs:
drm_sysfs_connector_remove(connector); drm_sysfs_connector_remove(connector);
unlock:
mutex_unlock(&connector->mutex);
return ret; return ret;
} }
EXPORT_SYMBOL(drm_connector_register); EXPORT_SYMBOL(drm_connector_register);
@ -415,8 +421,11 @@ EXPORT_SYMBOL(drm_connector_register);
*/ */
void drm_connector_unregister(struct drm_connector *connector) void drm_connector_unregister(struct drm_connector *connector)
{ {
if (!connector->registered) mutex_lock(&connector->mutex);
if (!connector->registered) {
mutex_unlock(&connector->mutex);
return; return;
}
if (connector->funcs->early_unregister) if (connector->funcs->early_unregister)
connector->funcs->early_unregister(connector); connector->funcs->early_unregister(connector);
@ -425,36 +434,37 @@ void drm_connector_unregister(struct drm_connector *connector)
drm_debugfs_connector_remove(connector); drm_debugfs_connector_remove(connector);
connector->registered = false; connector->registered = false;
mutex_unlock(&connector->mutex);
} }
EXPORT_SYMBOL(drm_connector_unregister); EXPORT_SYMBOL(drm_connector_unregister);
void drm_connector_unregister_all(struct drm_device *dev) void drm_connector_unregister_all(struct drm_device *dev)
{ {
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
/* FIXME: taking the mode config mutex ends up in a clash with sysfs */ drm_connector_list_iter_get(dev, &conn_iter);
list_for_each_entry(connector, &dev->mode_config.connector_list, head) drm_for_each_connector_iter(connector, &conn_iter)
drm_connector_unregister(connector); drm_connector_unregister(connector);
drm_connector_list_iter_put(&conn_iter);
} }
int drm_connector_register_all(struct drm_device *dev) int drm_connector_register_all(struct drm_device *dev)
{ {
struct drm_connector *connector; struct drm_connector *connector;
int ret; struct drm_connector_list_iter conn_iter;
int ret = 0;
/* FIXME: taking the mode config mutex ends up in a clash with drm_connector_list_iter_get(dev, &conn_iter);
* fbcon/backlight registration */ drm_for_each_connector_iter(connector, &conn_iter) {
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
ret = drm_connector_register(connector); ret = drm_connector_register(connector);
if (ret) if (ret)
goto err; break;
} }
drm_connector_list_iter_put(&conn_iter);
return 0; if (ret)
drm_connector_unregister_all(dev);
err:
mutex_unlock(&dev->mode_config.mutex);
drm_connector_unregister_all(dev);
return ret; return ret;
} }
@ -476,6 +486,87 @@ const char *drm_get_connector_status_name(enum drm_connector_status status)
} }
EXPORT_SYMBOL(drm_get_connector_status_name); EXPORT_SYMBOL(drm_get_connector_status_name);
#ifdef CONFIG_LOCKDEP
static struct lockdep_map connector_list_iter_dep_map = {
.name = "drm_connector_list_iter"
};
#endif
/**
* drm_connector_list_iter_get - initialize a connector_list iterator
* @dev: DRM device
* @iter: connector_list iterator
*
* Sets @iter up to walk the connector list in &drm_mode_config of @dev. @iter
* must always be cleaned up again by calling drm_connector_list_iter_put().
* Iteration itself happens using drm_connector_list_iter_next() or
* drm_for_each_connector_iter().
*/
void drm_connector_list_iter_get(struct drm_device *dev,
struct drm_connector_list_iter *iter)
{
iter->dev = dev;
iter->conn = NULL;
lock_acquire_shared_recursive(&connector_list_iter_dep_map, 0, 1, NULL, _RET_IP_);
}
EXPORT_SYMBOL(drm_connector_list_iter_get);
/**
* drm_connector_list_iter_next - return next connector
* @iter: connector_list iterator
*
* Returns the next connector for @iter, or NULL when the list walk has
* completed.
*/
struct drm_connector *
drm_connector_list_iter_next(struct drm_connector_list_iter *iter)
{
struct drm_connector *old_conn = iter->conn;
struct drm_mode_config *config = &iter->dev->mode_config;
struct list_head *lhead;
unsigned long flags;
spin_lock_irqsave(&config->connector_list_lock, flags);
lhead = old_conn ? &old_conn->head : &config->connector_list;
do {
if (lhead->next == &config->connector_list) {
iter->conn = NULL;
break;
}
lhead = lhead->next;
iter->conn = list_entry(lhead, struct drm_connector, head);
/* loop until it's not a zombie connector */
} while (!kref_get_unless_zero(&iter->conn->base.refcount));
spin_unlock_irqrestore(&config->connector_list_lock, flags);
if (old_conn)
drm_connector_unreference(old_conn);
return iter->conn;
}
EXPORT_SYMBOL(drm_connector_list_iter_next);
/**
* drm_connector_list_iter_put - tear down a connector_list iterator
* @iter: connector_list iterator
*
* Tears down @iter and releases any resources (like &drm_connector references)
* acquired while walking the list. This must always be called, both when the
* iteration completes fully and when it is aborted without walking the entire
* list.
*/
void drm_connector_list_iter_put(struct drm_connector_list_iter *iter)
{
iter->dev = NULL;
if (iter->conn)
drm_connector_unreference(iter->conn);
lock_release(&connector_list_iter_dep_map, 0, _RET_IP_);
}
EXPORT_SYMBOL(drm_connector_list_iter_put);
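
A hedged usage sketch of the iterator API added above; the _put() call is mandatory even on early exit, since the iterator may still hold a connector reference:

/* Illustrative only: walk all connectors without mode_config.mutex. */
struct drm_connector_list_iter conn_iter;
struct drm_connector *connector;

drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
	if (connector->status != connector_status_connected)
		continue;
	/* ... use connector; breaking out of the loop early is fine ... */
}
drm_connector_list_iter_put(&conn_iter);	/* always pair with _get() */
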
static const struct drm_prop_enum_list drm_subpixel_enum_list[] = { static const struct drm_prop_enum_list drm_subpixel_enum_list[] = {
{ SubPixelUnknown, "Unknown" }, { SubPixelUnknown, "Unknown" },
{ SubPixelHorizontalRGB, "Horizontal RGB" }, { SubPixelHorizontalRGB, "Horizontal RGB" },
@ -1072,36 +1163,9 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
memset(&u_mode, 0, sizeof(struct drm_mode_modeinfo)); memset(&u_mode, 0, sizeof(struct drm_mode_modeinfo));
mutex_lock(&dev->mode_config.mutex);
connector = drm_connector_lookup(dev, out_resp->connector_id); connector = drm_connector_lookup(dev, out_resp->connector_id);
if (!connector) { if (!connector)
ret = -ENOENT; return -ENOENT;
goto out_unlock;
}
for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++)
if (connector->encoder_ids[i] != 0)
encoders_count++;
if (out_resp->count_modes == 0) {
connector->funcs->fill_modes(connector,
dev->mode_config.max_width,
dev->mode_config.max_height);
}
/* delayed so we get modes regardless of pre-fill_modes state */
list_for_each_entry(mode, &connector->modes, head)
if (drm_mode_expose_to_userspace(mode, file_priv))
mode_count++;
out_resp->connector_id = connector->base.id;
out_resp->connector_type = connector->connector_type;
out_resp->connector_type_id = connector->connector_type_id;
out_resp->mm_width = connector->display_info.width_mm;
out_resp->mm_height = connector->display_info.height_mm;
out_resp->subpixel = connector->display_info.subpixel_order;
out_resp->connection = connector->status;
drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
encoder = drm_connector_get_encoder(connector); encoder = drm_connector_get_encoder(connector);
@ -1110,6 +1174,55 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
else else
out_resp->encoder_id = 0; out_resp->encoder_id = 0;
ret = drm_mode_object_get_properties(&connector->base, file_priv->atomic,
(uint32_t __user *)(unsigned long)(out_resp->props_ptr),
(uint64_t __user *)(unsigned long)(out_resp->prop_values_ptr),
&out_resp->count_props);
drm_modeset_unlock(&dev->mode_config.connection_mutex);
if (ret)
goto out_unref;
for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++)
if (connector->encoder_ids[i] != 0)
encoders_count++;
if ((out_resp->count_encoders >= encoders_count) && encoders_count) {
copied = 0;
encoder_ptr = (uint32_t __user *)(unsigned long)(out_resp->encoders_ptr);
for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) {
if (connector->encoder_ids[i] != 0) {
if (put_user(connector->encoder_ids[i],
encoder_ptr + copied)) {
ret = -EFAULT;
goto out_unref;
}
copied++;
}
}
}
out_resp->count_encoders = encoders_count;
out_resp->connector_id = connector->base.id;
out_resp->connector_type = connector->connector_type;
out_resp->connector_type_id = connector->connector_type_id;
mutex_lock(&dev->mode_config.mutex);
if (out_resp->count_modes == 0) {
connector->funcs->fill_modes(connector,
dev->mode_config.max_width,
dev->mode_config.max_height);
}
out_resp->mm_width = connector->display_info.width_mm;
out_resp->mm_height = connector->display_info.height_mm;
out_resp->subpixel = connector->display_info.subpixel_order;
out_resp->connection = connector->status;
/* delayed so we get modes regardless of pre-fill_modes state */
list_for_each_entry(mode, &connector->modes, head)
if (drm_mode_expose_to_userspace(mode, file_priv))
mode_count++;
/* /*
* This ioctl is called twice, once to determine how much space is * This ioctl is called twice, once to determine how much space is
* needed, and the 2nd time to fill it. * needed, and the 2nd time to fill it.
@ -1131,36 +1244,10 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
} }
} }
out_resp->count_modes = mode_count; out_resp->count_modes = mode_count;
ret = drm_mode_object_get_properties(&connector->base, file_priv->atomic,
(uint32_t __user *)(unsigned long)(out_resp->props_ptr),
(uint64_t __user *)(unsigned long)(out_resp->prop_values_ptr),
&out_resp->count_props);
if (ret)
goto out;
if ((out_resp->count_encoders >= encoders_count) && encoders_count) {
copied = 0;
encoder_ptr = (uint32_t __user *)(unsigned long)(out_resp->encoders_ptr);
for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) {
if (connector->encoder_ids[i] != 0) {
if (put_user(connector->encoder_ids[i],
encoder_ptr + copied)) {
ret = -EFAULT;
goto out;
}
copied++;
}
}
}
out_resp->count_encoders = encoders_count;
out:
mutex_unlock(&dev->mode_config.mutex);
out_unref:
drm_connector_unreference(connector);
return ret;
}
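The two-call convention described in the comment above is easiest to see from the userspace side. A minimal, hedged sketch of the size-then-fill dance against DRM_IOCTL_MODE_GETCONNECTOR (the get_modes() helper is invented for illustration; header paths vary by distro, and real code retries if the counts grow between the two calls):

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>

/* Query one connector's mode list via the two-pass ioctl protocol. */
static struct drm_mode_modeinfo *get_modes(int fd, uint32_t conn_id,
					   uint32_t *count)
{
	struct drm_mode_get_connector arg = { .connector_id = conn_id };
	struct drm_mode_modeinfo *modes;

	/* Pass 1: all counts zero, so the kernel reports how much space is
	 * needed (and, because count_modes == 0, re-probes the connector). */
	if (ioctl(fd, DRM_IOCTL_MODE_GETCONNECTOR, &arg))
		return NULL;

	modes = calloc(arg.count_modes, sizeof(*modes));
	if (!modes)
		return NULL;

	/* Pass 2: same ioctl, now with a buffer to fill. Properties and
	 * encoders would be sized and fetched the same way. */
	arg.modes_ptr = (uintptr_t)modes;
	arg.count_props = 0;
	arg.count_encoders = 0;
	if (ioctl(fd, DRM_IOCTL_MODE_GETCONNECTOR, &arg)) {
		free(modes);
		return NULL;
	}

	*count = arg.count_modes;
	return modes;
}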


@ -357,7 +357,10 @@ int drm_mode_getcrtc(struct drm_device *dev,
drm_modeset_lock_crtc(crtc, crtc->primary);
crtc_resp->gamma_size = crtc->gamma_size;
if (crtc->primary->state && crtc->primary->state->fb)
crtc_resp->fb_id = crtc->primary->state->fb->base.id;
else if (!crtc->primary->state && crtc->primary->fb)
crtc_resp->fb_id = crtc->primary->fb->base.id;
else
crtc_resp->fb_id = 0;
@ -572,11 +575,11 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
*/
if (!crtc->primary->format_default) {
ret = drm_plane_check_pixel_format(crtc->primary,
fb->format->format);
if (ret) {
struct drm_format_name_buf format_name;
DRM_DEBUG_KMS("Invalid pixel format %s\n",
drm_get_format_name(fb->format->format,
&format_name));
goto out;
}


@ -36,6 +36,7 @@
#include <drm/drmP.h>
#include <drm/drm_atomic.h>
#include <drm/drm_crtc.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
@ -88,6 +89,7 @@
bool drm_helper_encoder_in_use(struct drm_encoder *encoder)
{
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_device *dev = encoder->dev;
/*
@ -99,9 +101,15 @@ bool drm_helper_encoder_in_use(struct drm_encoder *encoder)
WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
}
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->encoder == encoder) {
drm_connector_list_iter_put(&conn_iter);
return true;
}
}
drm_connector_list_iter_put(&conn_iter);
return false;
}
EXPORT_SYMBOL(drm_helper_encoder_in_use);
@ -436,10 +444,13 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
/* Decouple all encoders and their attached connectors from this crtc */
drm_for_each_encoder(encoder, dev) {
struct drm_connector_list_iter conn_iter;
if (encoder->crtc != crtc)
continue;
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->encoder != encoder)
continue;
@ -456,6 +467,7 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
/* we keep a reference while the encoder is bound */
drm_connector_unreference(connector);
}
drm_connector_list_iter_put(&conn_iter);
}
__drm_helper_disable_unused_functions(dev);
@ -507,6 +519,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
bool mode_changed = false; /* if true do a full mode set */
bool fb_changed = false; /* if true and !mode_changed just do a flip */
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
int count = 0, ro, fail = 0;
const struct drm_crtc_helper_funcs *crtc_funcs;
struct drm_mode_set save_set;
@ -571,9 +584,10 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
}
count = 0;
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
save_connector_encoders[count++] = connector->encoder;
drm_connector_list_iter_put(&conn_iter);
save_set.crtc = set->crtc;
save_set.mode = &set->crtc->mode;
@ -588,8 +602,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
if (set->crtc->primary->fb == NULL) {
DRM_DEBUG_KMS("crtc has no fb, full mode set\n");
mode_changed = true;
} else if (set->fb->format != set->crtc->primary->fb->format) {
mode_changed = true;
} else
fb_changed = true;
@ -616,7 +629,8 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
/* a) traverse passed in connector list and get encoders for them */
count = 0;
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
const struct drm_connector_helper_funcs *connector_funcs =
connector->helper_private;
new_encoder = connector->encoder;
@ -649,6 +663,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
connector->encoder = new_encoder;
}
}
drm_connector_list_iter_put(&conn_iter);
if (fail) {
ret = -EINVAL;
@ -656,7 +671,8 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
}
count = 0;
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (!connector->encoder)
continue;
@ -674,6 +690,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
if (new_crtc &&
!drm_encoder_crtc_ok(connector->encoder, new_crtc)) {
ret = -EINVAL;
drm_connector_list_iter_put(&conn_iter);
goto fail;
}
if (new_crtc != connector->encoder->crtc) {
@ -690,6 +707,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
connector->base.id, connector->name);
}
}
drm_connector_list_iter_put(&conn_iter);
/* mode_set_base is not a required function */
if (fb_changed && !crtc_funcs->mode_set_base)
@ -744,9 +762,10 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
}
count = 0;
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
connector->encoder = save_connector_encoders[count++];
drm_connector_list_iter_put(&conn_iter);
/* after fail drop reference on all unbound connectors in set, let
* bound connectors keep their reference
@ -773,12 +792,16 @@ static int drm_helper_choose_encoder_dpms(struct drm_encoder *encoder)
{
int dpms = DRM_MODE_DPMS_OFF;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_device *dev = encoder->dev;
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
if (connector->encoder == encoder)
if (connector->dpms < dpms)
dpms = connector->dpms;
drm_connector_list_iter_put(&conn_iter);
return dpms;
}
@ -810,12 +833,16 @@ static int drm_helper_choose_crtc_dpms(struct drm_crtc *crtc)
{
int dpms = DRM_MODE_DPMS_OFF;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_device *dev = crtc->dev;
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
if (connector->encoder && connector->encoder->crtc == crtc)
if (connector->dpms < dpms)
dpms = connector->dpms;
drm_connector_list_iter_put(&conn_iter);
return dpms;
}


@ -174,6 +174,12 @@ int drm_mode_dirtyfb_ioctl(struct drm_device *dev,
void *data, struct drm_file *file_priv);
/* drm_atomic.c */
#ifdef CONFIG_DEBUG_FS
struct drm_minor;
int drm_atomic_debugfs_init(struct drm_minor *minor);
int drm_atomic_debugfs_cleanup(struct drm_minor *minor);
#endif
int drm_atomic_get_property(struct drm_mode_object *obj,
struct drm_property *property, uint64_t *val);
int drm_mode_atomic_ioctl(struct drm_device *dev,
@ -186,6 +192,9 @@ void drm_plane_unregister_all(struct drm_device *dev);
int drm_plane_check_pixel_format(const struct drm_plane *plane,
u32 format);
/* drm_bridge.c */
void drm_bridge_detach(struct drm_bridge *bridge);
/* IOCTL */
int drm_mode_getplane_res(struct drm_device *dev, void *data,
struct drm_file *file_priv);


@ -38,6 +38,7 @@
#include <drm/drm_edid.h>
#include <drm/drm_atomic.h>
#include "drm_internal.h"
#include "drm_crtc_internal.h"
#if defined(CONFIG_DEBUG_FS)


@ -323,9 +323,8 @@ void drm_minor_release(struct drm_minor *minor)
* historical baggage. Hence use the reference counting provided by
* drm_dev_ref() and drm_dev_unref() only carefully.
*
* It is recommended that drivers embed struct &drm_device into their own device
* structure, which is supported through drm_dev_init().
*/
/**
@ -462,7 +461,11 @@ static void drm_fs_inode_free(struct inode *inode)
* Note that for purely virtual devices @parent can be NULL.
*
* Drivers that do not want to allocate their own device struct
* embedding struct &drm_device can call drm_dev_alloc() instead. For drivers
* that do embed struct &drm_device it must be placed first in the overall
* structure, and the overall structure must be allocated using kmalloc(): The
* drm core's release function unconditionally calls kfree() on the @dev pointer
* when the final reference is released.
*
* RETURNS:
* 0 on success, or error code on failure.
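To make the embedding rules above concrete, here is a minimal sketch of a driver that places struct drm_device first and allocates the containing structure from the slab, as the comment requires. The foo_device type, foo_driver ops and probe function are all hypothetical:

#include <linux/platform_device.h>
#include <linux/slab.h>
#include <drm/drmP.h>

static struct drm_driver foo_driver = { /* hypothetical driver ops */ };

/* struct drm_device placed first, so the core's kfree(dev) on the final
 * unref frees the whole kmalloc-backed allocation. */
struct foo_device {
	struct drm_device drm;	/* must be the first member */
	void __iomem *mmio;	/* driver-private state follows */
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_device *foo;
	int ret;

	foo = kzalloc(sizeof(*foo), GFP_KERNEL);	/* kmalloc-backed */
	if (!foo)
		return -ENOMEM;

	ret = drm_dev_init(&foo->drm, &foo_driver, &pdev->dev);
	if (ret) {
		kfree(foo);
		return ret;
	}
	/* ... hardware setup, then drm_dev_register(&foo->drm, 0) ... */
	return 0;
}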


@ -35,6 +35,7 @@
#include <linux/vga_switcheroo.h>
#include <drm/drmP.h>
#include <drm/drm_edid.h>
#include <drm/drm_encoder.h>
#include <drm/drm_displayid.h>
#define version_greater(edid, maj, min) \


@ -159,6 +159,17 @@ void drm_encoder_cleanup(struct drm_encoder *encoder)
* the indices on the drm_encoder after us in the encoder_list.
*/
if (encoder->bridge) {
struct drm_bridge *bridge = encoder->bridge;
struct drm_bridge *next;
while (bridge) {
next = bridge->next;
drm_bridge_detach(bridge);
bridge = next;
}
}
drm_mode_object_unregister(dev, &encoder->base);
kfree(encoder->name);
list_del(&encoder->head);
@ -173,10 +184,12 @@ static struct drm_crtc *drm_encoder_get_crtc(struct drm_encoder *encoder)
struct drm_connector *connector;
struct drm_device *dev = encoder->dev;
bool uses_atomic = false;
struct drm_connector_list_iter conn_iter;
/* For atomic drivers only state objects are synchronously updated and
* protected by modeset locks, so check those first. */
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (!connector->state)
continue;
@ -185,8 +198,10 @@ static struct drm_crtc *drm_encoder_get_crtc(struct drm_encoder *encoder)
if (connector->state->best_encoder != encoder)
continue;
drm_connector_list_iter_put(&conn_iter);
return connector->state->crtc;
}
drm_connector_list_iter_put(&conn_iter);
/* Don't return stale data (e.g. pending async disable). */
if (uses_atomic)


@ -147,7 +147,7 @@ static struct drm_fb_cma *drm_fb_cma_alloc(struct drm_device *dev,
if (!fb_cma)
return ERR_PTR(-ENOMEM);
drm_helper_mode_fill_fb_struct(dev, &fb_cma->fb, mode_cmd);
for (i = 0; i < num_planes; i++)
fb_cma->obj[i] = obj[i];
@ -304,15 +304,12 @@ EXPORT_SYMBOL_GPL(drm_fb_cma_prepare_fb);
static void drm_fb_cma_describe(struct drm_framebuffer *fb, struct seq_file *m)
{
struct drm_fb_cma *fb_cma = to_fb_cma(fb);
int i;
seq_printf(m, "fb: %dx%d@%4.4s\n", fb->width, fb->height,
(char *)&fb->format->format);
for (i = 0; i < fb->format->num_planes; i++) {
seq_printf(m, " %d: offset=%d pitch=%d, obj: ",
i, fb->offsets[i], fb->pitches[i]);
drm_gem_cma_describe(fb_cma->obj[i], m);
@ -467,7 +464,7 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper,
fbi->flags = FBINFO_FLAG_DEFAULT;
fbi->fbops = &drm_fbdev_cma_ops;
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);
offset = fbi->var.xoffset * bytes_per_pixel;


@ -120,20 +120,22 @@ int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
{
struct drm_device *dev = fb_helper->dev;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
int i, ret = 0;
if (!drm_fbdev_emulation)
return 0;
mutex_lock(&dev->mode_config.mutex);
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
ret = drm_fb_helper_add_one_connector(fb_helper, connector);
if (ret)
goto fail;
}
goto out;
fail:
drm_fb_helper_for_each_connector(fb_helper, i) {
struct drm_fb_helper_connector *fb_helper_connector =
@ -145,6 +147,8 @@ int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
fb_helper->connector_info[i] = NULL;
}
fb_helper->connector_count = 0;
out:
drm_connector_list_iter_put(&conn_iter);
mutex_unlock(&dev->mode_config.mutex);
return ret;
@ -401,7 +405,7 @@ static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
drm_warn_on_modeset_not_all_locked(dev);
if (drm_drv_uses_atomic_modeset(dev))
return restore_fbdev_mode_atomic(fb_helper);
drm_for_each_plane(plane, dev) {
@ -1169,7 +1173,7 @@ static int setcolreg(struct drm_crtc *crtc, u16 red, u16 green,
!fb_helper->funcs->gamma_get))
return -EINVAL;
WARN_ON(fb->format->cpp[0] != 1);
fb_helper->funcs->gamma_set(crtc, red, green, blue, regno);
@ -1252,14 +1256,14 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
* Changes struct fb_var_screeninfo are currently not pushed back
* to KMS, hence fail if different settings are requested.
*/
if (var->bits_per_pixel != fb->format->cpp[0] * 8 ||
var->xres != fb->width || var->yres != fb->height ||
var->xres_virtual != fb->width || var->yres_virtual != fb->height) {
DRM_DEBUG("fb userspace requested width/height/bpp different than current fb "
"request %dx%d-%d (virtual %dx%d) > %dx%d-%d\n",
var->xres, var->yres, var->bits_per_pixel,
var->xres_virtual, var->yres_virtual,
fb->width, fb->height, fb->format->cpp[0] * 8);
return -EINVAL;
}
@ -1440,7 +1444,7 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
return -EBUSY;
}
if (drm_drv_uses_atomic_modeset(dev)) {
ret = pan_display_atomic(var, info);
goto unlock;
}
@ -1645,7 +1649,7 @@ void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper *fb_helpe
info->pseudo_palette = fb_helper->pseudo_palette;
info->var.xres_virtual = fb->width;
info->var.yres_virtual = fb->height;
info->var.bits_per_pixel = fb->format->cpp[0] * 8;
info->var.accel_flags = FB_ACCELF_TEXT;
info->var.xoffset = 0;
info->var.yoffset = 0;
@ -1653,7 +1657,7 @@ void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper *fb_helpe
info->var.height = -1;
info->var.width = -1;
switch (fb->format->depth) {
case 8:
info->var.red.offset = 0;
info->var.green.offset = 0;
@ -2056,7 +2060,7 @@ static int drm_pick_crtcs(struct drm_fb_helper *fb_helper,
* NULL we fallback to the default drm_atomic_helper_best_encoder()
* helper.
*/
if (drm_drv_uses_atomic_modeset(fb_helper->dev) &&
!connector_funcs->best_encoder)
encoder = drm_atomic_helper_best_encoder(connector);
else


@ -622,7 +622,7 @@ EXPORT_SYMBOL(drm_event_reserve_init_locked);
* kmalloc and @p must be the first member element.
*
* Callers which already hold dev->event_lock should use
* drm_event_reserve_init_locked() instead.
*
* RETURNS:
*


@ -432,8 +432,8 @@ int drm_mode_getfb(struct drm_device *dev,
r->height = fb->height;
r->width = fb->width;
r->depth = fb->format->depth;
r->bpp = fb->format->cpp[0] * 8;
r->pitch = fb->pitches[0];
if (fb->funcs->create_handle) {
if (drm_is_current_master(file_priv) || capable(CAP_SYS_ADMIN) ||
@ -631,8 +631,11 @@ int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
{
int ret;
if (WARN_ON_ONCE(fb->dev != dev || !fb->format))
return -EINVAL;
INIT_LIST_HEAD(&fb->filp_head);
fb->funcs = funcs;
ret = drm_mode_object_get_reg(dev, &fb->base, DRM_MODE_OBJECT_FB,
@ -790,3 +793,47 @@ void drm_framebuffer_remove(struct drm_framebuffer *fb)
drm_framebuffer_unreference(fb);
}
EXPORT_SYMBOL(drm_framebuffer_remove);
/**
* drm_framebuffer_plane_width - width of the plane given the first plane
* @width: width of the first plane
* @fb: the framebuffer
* @plane: plane index
*
* Returns:
* The width of @plane, given that the width of the first plane is @width.
*/
int drm_framebuffer_plane_width(int width,
const struct drm_framebuffer *fb, int plane)
{
if (plane >= fb->format->num_planes)
return 0;
if (plane == 0)
return width;
return width / fb->format->hsub;
}
EXPORT_SYMBOL(drm_framebuffer_plane_width);
/**
* drm_framebuffer_plane_height - height of the plane given the first plane
* @height: height of the first plane
* @fb: the framebuffer
* @plane: plane index
*
* Returns:
* The height of @plane, given that the height of the first plane is @height.
*/
int drm_framebuffer_plane_height(int height,
const struct drm_framebuffer *fb, int plane)
{
if (plane >= fb->format->num_planes)
return 0;
if (plane == 0)
return height;
return height / fb->format->vsub;
}
EXPORT_SYMBOL(drm_framebuffer_plane_height);
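A short usage sketch for the two helpers above, assuming a 1920x1080 NV12 framebuffer (plane 0 is the full-resolution Y plane, plane 1 the 2x2-subsampled CbCr plane, so hsub = vsub = 2):

/* Fragment; fb is an NV12 struct drm_framebuffer set up elsewhere. */
int y_w = drm_framebuffer_plane_width(fb->width, fb, 0);	/* 1920 */
int y_h = drm_framebuffer_plane_height(fb->height, fb, 0);	/* 1080 */
int uv_w = drm_framebuffer_plane_width(fb->width, fb, 1);	/* 960 */
int uv_h = drm_framebuffer_plane_height(fb->height, fb, 1);	/* 540 */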


@ -58,10 +58,10 @@ extern unsigned int drm_timestamp_monotonic;
/* IOCTLS */
int drm_wait_vblank(struct drm_device *dev, void *data,
struct drm_file *filp);
int drm_legacy_irq_control(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_modeset_ctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
/* drm_auth.c */
int drm_getmagic(struct drm_device *dev, void *data,


@ -115,11 +115,15 @@ static int drm_getunique(struct drm_device *dev, void *data,
struct drm_unique *u = data;
struct drm_master *master = file_priv->master;
mutex_lock(&master->dev->master_mutex);
if (u->unique_len >= master->unique_len) {
if (copy_to_user(u->unique, master->unique, master->unique_len)) {
mutex_unlock(&master->dev->master_mutex);
return -EFAULT;
}
}
u->unique_len = master->unique_len;
mutex_unlock(&master->dev->master_mutex);
return 0;
}
@ -340,6 +344,7 @@ static int drm_setversion(struct drm_device *dev, void *data, struct drm_file *f
struct drm_set_version *sv = data;
int if_version, retcode = 0;
mutex_lock(&dev->master_mutex);
if (sv->drm_di_major != -1) {
if (sv->drm_di_major != DRM_IF_MAJOR ||
sv->drm_di_minor < 0 || sv->drm_di_minor > DRM_IF_MINOR) {
@ -374,6 +379,7 @@ static int drm_setversion(struct drm_device *dev, void *data, struct drm_file *f
sv->drm_di_minor = DRM_IF_MINOR;
sv->drm_dd_major = dev->driver->major;
sv->drm_dd_minor = dev->driver->minor;
mutex_unlock(&dev->master_mutex);
return retcode;
}
@ -528,15 +534,15 @@ EXPORT_SYMBOL(drm_ioctl_permit);
static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version,
DRM_UNLOCKED|DRM_RENDER_ALLOW|DRM_CONTROL_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_irq_by_busid, DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_legacy_getmap_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_getclient, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_getstats, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, drm_getcap, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_setclientcap, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_UNLOCKED | DRM_MASTER),
DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_invalid_op, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_BLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
@ -575,7 +581,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_legacy_freebufs, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_DMA, drm_legacy_dma_ioctl, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_legacy_irq_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
#if IS_ENABLED(CONFIG_AGP)
DRM_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_agp_acquire_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
@ -593,7 +599,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_legacy_modeset_ctl, 0),
DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
@ -729,9 +735,8 @@ long drm_ioctl(struct file *filp,
if (ksize > in_size)
memset(kdata + in_size, 0, ksize - in_size);
/* Enforce sane locking for modern driver ioctls. */
if (!drm_core_check_feature(dev, DRIVER_LEGACY) ||
(ioctl->flags & DRM_UNLOCKED))
retcode = func(dev, kdata, file_priv);
else {
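The rewritten check above means any non-legacy driver now gets unlocked ioctl dispatch unconditionally, while legacy drivers still funnel through drm_global_mutex unless an entry opts out. A hypothetical driver table entry for illustration (FOO_SUBMIT and foo_submit_ioctl are invented names, assuming the matching DRM_IOCTL_FOO_SUBMIT definitions exist):

static const struct drm_ioctl_desc foo_ioctls[] = {
	/* DRM_UNLOCKED is now only meaningful for DRIVER_LEGACY drivers;
	 * modern drivers are dispatched without drm_global_mutex anyway. */
	DRM_IOCTL_DEF_DRV(FOO_SUBMIT, foo_submit_ioctl,
			  DRM_AUTH | DRM_RENDER_ALLOW | DRM_UNLOCKED),
};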


@ -579,19 +579,8 @@ int drm_irq_uninstall(struct drm_device *dev)
}
EXPORT_SYMBOL(drm_irq_uninstall);
int drm_legacy_irq_control(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_control *ctl = data;
int ret = 0, irq;
@ -1442,19 +1431,8 @@ static void drm_legacy_vblank_post_modeset(struct drm_device *dev,
}
}
int drm_legacy_modeset_ctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_modeset_ctl *modeset = data;
unsigned int pipe;


@ -1,6 +1,7 @@
/**************************************************************************
*
* Copyright 2006 Tungsten Graphics, Inc., Bismarck, ND., USA.
* Copyright 2016 Intel Corporation
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
@ -31,9 +32,9 @@
* class implementation for more advanced memory managers.
*
* Note that the algorithm used is quite simple and there might be substantial
* performance gains if a smarter free list is implemented. Currently it is
* just an unordered stack of free regions. This could easily be improved if
* an RB-tree is used instead. At least if we expect heavy fragmentation.
*
* Aligned allocations can also see improvement.
*
@ -67,7 +68,7 @@
* where an object needs to be created which exactly matches the firmware's
* scanout target. As long as the range is still free it can be inserted anytime
* after the allocator is initialized, which helps with avoiding looped
* dependencies in the driver load sequence.
*
* drm_mm maintains a stack of most recently freed holes, which of all
* simplistic datastructures seems to be a fairly decent approach to clustering
@ -78,27 +79,27 @@
*
* drm_mm supports a few features: Alignment and range restrictions can be
* supplied. Further more every &drm_mm_node has a color value (which is just an
* opaque unsigned long) which in conjunction with a driver callback can be used
* to implement sophisticated placement restrictions. The i915 DRM driver uses
* this to implement guard pages between incompatible caching domains in the
* graphics TT.
*
* Two behaviors are supported for searching and allocating: bottom-up and
* top-down. The default is bottom-up. Top-down allocation can be used if the
* memory area has different restrictions, or just to reduce fragmentation.
*
* Finally iteration helpers to walk all nodes and all holes are provided as are
* some basic allocator dumpers for debugging.
*
* Note that this range allocator is not thread-safe, drivers need to protect
* modifications with their own locking. The idea behind this is that for a full
* memory manager additional data needs to be protected anyway, hence internal
* locking would be fully redundant.
*/
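To ground the feature list above, here is a minimal sketch of a range- and alignment-restricted allocation using the post-rework entry point from this diff (SZ_* constants from <linux/sizes.h>; the wrapper function is hypothetical and error handling is trimmed; callers provide their own locking, per the note above):

static int foo_mm_demo(void)
{
	struct drm_mm mm;
	struct drm_mm_node node = {};
	int err;

	drm_mm_init(&mm, 0, SZ_16M);	/* manage offsets [0, 16M) */

	/* 4KiB allocation, 64KiB-aligned, placed top-down in the upper 8M */
	err = drm_mm_insert_node_in_range_generic(&mm, &node,
						  SZ_4K, SZ_64K, 0,
						  SZ_8M, SZ_16M,
						  DRM_MM_SEARCH_BELOW,
						  DRM_MM_CREATE_TOP);
	if (!err)
		drm_mm_remove_node(&node);	/* node.start held the offset */

	drm_mm_takedown(&mm);
	return err;
}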
static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
u64 size,
unsigned alignment,
unsigned long color,
enum drm_mm_search_flags flags);
static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
u64 size,
u64 alignment,
unsigned long color,
u64 start,
u64 end,
@ -138,7 +139,7 @@ static void show_leaks(struct drm_mm *mm)
if (!buf)
return;
list_for_each_entry(node, drm_mm_nodes(mm), node_list) {
struct stack_trace trace = {
.entries = entries,
.max_entries = STACKDEPTH
@ -174,9 +175,9 @@ INTERVAL_TREE_DEFINE(struct drm_mm_node, rb,
START, LAST, static inline, drm_mm_interval_tree)
struct drm_mm_node *
__drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last)
{
return drm_mm_interval_tree_iter_first((struct rb_root *)&mm->interval_tree,
start, last);
}
EXPORT_SYMBOL(__drm_mm_interval_first);
@ -227,8 +228,9 @@ static void drm_mm_interval_tree_add_node(struct drm_mm_node *hole_node,
static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
struct drm_mm_node *node,
u64 size, u64 alignment,
unsigned long color,
u64 range_start, u64 range_end,
enum drm_mm_allocator_flags flags)
{
struct drm_mm *mm = hole_node->mm;
@ -237,19 +239,21 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
u64 adj_start = hole_start;
u64 adj_end = hole_end;
DRM_MM_BUG_ON(!drm_mm_hole_follows(hole_node) || node->allocated);
if (mm->color_adjust)
mm->color_adjust(hole_node, color, &adj_start, &adj_end);
adj_start = max(adj_start, range_start);
adj_end = min(adj_end, range_end);
if (flags & DRM_MM_CREATE_TOP)
adj_start = adj_end - size;
if (alignment) {
u64 rem;
div64_u64_rem(adj_start, alignment, &rem);
if (rem) {
if (flags & DRM_MM_CREATE_TOP)
adj_start -= rem;
@ -258,9 +262,6 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
}
}
if (adj_start == hole_start) {
hole_node->hole_follows = 0;
list_del(&hole_node->hole_stack);
@ -276,7 +277,10 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
drm_mm_interval_tree_add_node(hole_node, node);
DRM_MM_BUG_ON(node->start < range_start);
DRM_MM_BUG_ON(node->start < adj_start);
DRM_MM_BUG_ON(node->start + node->size > adj_end);
DRM_MM_BUG_ON(node->start + node->size > range_end);
node->hole_follows = 0;
if (__drm_mm_hole_node_start(node) < hole_end) {
@ -308,10 +312,9 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
u64 hole_start, hole_end;
u64 adj_start, adj_end;
if (WARN_ON(node->size == 0))
return -EINVAL;
end = node->start + node->size;
/* Find the relevant hole to add our node to */
hole = drm_mm_interval_tree_iter_first(&mm->interval_tree,
@ -320,12 +323,11 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
if (hole->start < end)
return -ENOSPC;
} else {
hole = list_entry(drm_mm_nodes(mm), typeof(*hole), node_list);
}
hole = list_last_entry(&hole->node_list, typeof(*hole), node_list);
if (!drm_mm_hole_follows(hole))
return -ENOSPC;
adj_start = hole_start = __drm_mm_hole_node_start(hole);
@ -361,110 +363,6 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
}
EXPORT_SYMBOL(drm_mm_reserve_node);
/**
* drm_mm_insert_node_generic - search for space and insert @node
* @mm: drm_mm to allocate from
* @node: preallocate node to insert
* @size: size of the allocation
* @alignment: alignment of the allocation
* @color: opaque tag value to use for this node
* @sflags: flags to fine-tune the allocation search
* @aflags: flags to fine-tune the allocation behavior
*
* The preallocated node must be cleared to 0.
*
* Returns:
* 0 on success, -ENOSPC if there's no suitable hole.
*/
int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node,
u64 size, unsigned alignment,
unsigned long color,
enum drm_mm_search_flags sflags,
enum drm_mm_allocator_flags aflags)
{
struct drm_mm_node *hole_node;
if (WARN_ON(size == 0))
return -EINVAL;
hole_node = drm_mm_search_free_generic(mm, size, alignment,
color, sflags);
if (!hole_node)
return -ENOSPC;
drm_mm_insert_helper(hole_node, node, size, alignment, color, aflags);
return 0;
}
EXPORT_SYMBOL(drm_mm_insert_node_generic);
static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
struct drm_mm_node *node,
u64 size, unsigned alignment,
unsigned long color,
u64 start, u64 end,
enum drm_mm_allocator_flags flags)
{
struct drm_mm *mm = hole_node->mm;
u64 hole_start = drm_mm_hole_node_start(hole_node);
u64 hole_end = drm_mm_hole_node_end(hole_node);
u64 adj_start = hole_start;
u64 adj_end = hole_end;
BUG_ON(!hole_node->hole_follows || node->allocated);
if (adj_start < start)
adj_start = start;
if (adj_end > end)
adj_end = end;
if (mm->color_adjust)
mm->color_adjust(hole_node, color, &adj_start, &adj_end);
if (flags & DRM_MM_CREATE_TOP)
adj_start = adj_end - size;
if (alignment) {
u64 tmp = adj_start;
unsigned rem;
rem = do_div(tmp, alignment);
if (rem) {
if (flags & DRM_MM_CREATE_TOP)
adj_start -= rem;
else
adj_start += alignment - rem;
}
}
if (adj_start == hole_start) {
hole_node->hole_follows = 0;
list_del(&hole_node->hole_stack);
}
node->start = adj_start;
node->size = size;
node->mm = mm;
node->color = color;
node->allocated = 1;
list_add(&node->node_list, &hole_node->node_list);
drm_mm_interval_tree_add_node(hole_node, node);
BUG_ON(node->start < start);
BUG_ON(node->start < adj_start);
BUG_ON(node->start + node->size > adj_end);
BUG_ON(node->start + node->size > end);
node->hole_follows = 0;
if (__drm_mm_hole_node_start(node) < hole_end) {
list_add(&node->hole_stack, &mm->hole_stack);
node->hole_follows = 1;
}
save_stack(node);
}
/**
* drm_mm_insert_node_in_range_generic - ranged search for space and insert @node
* @mm: drm_mm to allocate from
@ -483,7 +381,7 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
* 0 on success, -ENOSPC if there's no suitable hole.
*/
int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node,
u64 size, u64 alignment,
unsigned long color,
u64 start, u64 end,
enum drm_mm_search_flags sflags,
@ -500,9 +398,9 @@ int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *n
if (!hole_node)
return -ENOSPC;
drm_mm_insert_helper(hole_node, node,
size, alignment, color,
start, end, aflags);
return 0;
}
EXPORT_SYMBOL(drm_mm_insert_node_in_range_generic);
@ -513,32 +411,29 @@ EXPORT_SYMBOL(drm_mm_insert_node_in_range_generic);
*
* This just removes a node from its drm_mm allocator. The node does not need to
* be cleared again before it can be re-inserted into this or any other drm_mm
* allocator. It is a bug to call this function on an unallocated node.
*/
void drm_mm_remove_node(struct drm_mm_node *node)
{
struct drm_mm *mm = node->mm;
struct drm_mm_node *prev_node;
DRM_MM_BUG_ON(!node->allocated);
DRM_MM_BUG_ON(node->scanned_block);
prev_node =
list_entry(node->node_list.prev, struct drm_mm_node, node_list);
if (drm_mm_hole_follows(node)) {
DRM_MM_BUG_ON(__drm_mm_hole_node_start(node) ==
__drm_mm_hole_node_end(node));
list_del(&node->hole_stack);
} else {
DRM_MM_BUG_ON(__drm_mm_hole_node_start(node) !=
__drm_mm_hole_node_end(node));
}
if (!drm_mm_hole_follows(prev_node)) {
prev_node->hole_follows = 1;
list_add(&prev_node->hole_stack, &mm->hole_stack);
} else
@ -550,16 +445,15 @@ void drm_mm_remove_node(struct drm_mm_node *node)
}
EXPORT_SYMBOL(drm_mm_remove_node);
static int check_free_hole(u64 start, u64 end, u64 size, u64 alignment)
{
if (end - start < size)
return 0;
if (alignment) {
u64 rem;
div64_u64_rem(start, alignment, &rem);
if (rem)
start += alignment - rem;
}
@ -567,51 +461,9 @@ static int check_free_hole(u64 start, u64 end, u64 size, unsigned alignment)
return end >= start + size;
}
static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
u64 size,
unsigned alignment,
unsigned long color,
enum drm_mm_search_flags flags)
{
struct drm_mm_node *entry;
struct drm_mm_node *best;
u64 adj_start;
u64 adj_end;
u64 best_size;
BUG_ON(mm->scanned_blocks);
best = NULL;
best_size = ~0UL;
__drm_mm_for_each_hole(entry, mm, adj_start, adj_end,
flags & DRM_MM_SEARCH_BELOW) {
u64 hole_size = adj_end - adj_start;
if (mm->color_adjust) {
mm->color_adjust(entry, color, &adj_start, &adj_end);
if (adj_end <= adj_start)
continue;
}
if (!check_free_hole(adj_start, adj_end, size, alignment))
continue;
if (!(flags & DRM_MM_SEARCH_BEST))
return entry;
if (hole_size < best_size) {
best = entry;
best_size = hole_size;
}
}
return best;
}
static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
u64 size,
u64 alignment,
unsigned long color,
u64 start,
u64 end,
@ -623,7 +475,7 @@ static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_
u64 adj_end;
u64 best_size;
DRM_MM_BUG_ON(mm->scan_active);
best = NULL;
best_size = ~0UL;
@ -632,17 +484,15 @@ static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_
flags & DRM_MM_SEARCH_BELOW) {
u64 hole_size = adj_end - adj_start;
if (mm->color_adjust) {
mm->color_adjust(entry, color, &adj_start, &adj_end);
if (adj_end <= adj_start)
continue;
}
adj_start = max(adj_start, start);
adj_end = min(adj_end, end);
if (!check_free_hole(adj_start, adj_end, size, alignment))
continue;
@ -669,6 +519,8 @@ static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_
*/
void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
{
DRM_MM_BUG_ON(!old->allocated);
list_replace(&old->node_list, &new->node_list);
list_replace(&old->hole_stack, &new->hole_stack);
rb_replace_node(&old->rb, &new->rb, &old->mm->interval_tree);
@ -692,96 +544,82 @@ EXPORT_SYMBOL(drm_mm_replace_node);
* efficient when we simply start to select all objects from the tail of an LRU
* until there's a suitable hole: Especially for big objects or nodes that
* otherwise have special allocation constraints there's a good chance we evict
* lots of (smaller) objects unnecessarily.
*
* The DRM range allocator supports this use-case through the scanning
* interfaces. First a scan operation needs to be initialized with
* drm_mm_scan_init() or drm_mm_scan_init_with_range(). The driver adds
* objects to the roster (probably by walking an LRU list, but this can be
* freely implemented) (using drm_mm_scan_add_block()) until a suitable hole
* is found or there are no further evictable objects.
*
* The driver must walk through all objects again in exactly the reverse
* order to restore the allocator state. Note that while the allocator is used
* in the scan mode no other operation is allowed.
*
* Finally the driver evicts all objects selected (drm_mm_scan_remove_block()
* reported true) in the scan, and any overlapping nodes after color adjustment
* (drm_mm_scan_evict_color()). Adding and removing an object is O(1), and
* since freeing a node is also O(1) the overall complexity is
* O(scanned_objects). So like the free stack which needs to be walked before a
* scan operation even begins this is linear in the number of objects. It
* doesn't seem to hurt too badly.
*/
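A condensed sketch of the roster workflow just described, using the reworked scan API from this diff. The LRU list and the foo_obj type with its node and link members are hypothetical driver state; a real driver would also unbind objects before freeing their nodes:

struct foo_obj {
	struct drm_mm_node node;
	struct list_head lru_link;	/* position on the driver LRU */
	struct list_head evict_link;	/* temporary roster membership */
};

static bool foo_evict_something(struct drm_mm *mm, struct list_head *lru,
				u64 size, u64 alignment)
{
	struct drm_mm_scan scan;
	struct foo_obj *obj, *next;
	bool found = false;
	LIST_HEAD(evict_list);

	drm_mm_scan_init_with_range(&scan, mm, size, alignment, 0,
				    0, U64_MAX, 0);

	/* Build the roster from the LRU until a hole is found. list_add()
	 * prepends, so evict_list ends up in reverse order of addition. */
	list_for_each_entry(obj, lru, lru_link) {
		list_add(&obj->evict_link, &evict_list);
		if (drm_mm_scan_add_block(&scan, &obj->node)) {
			found = true;
			break;
		}
	}

	/* Walk back in exactly the reverse order; keep only the nodes the
	 * scan selected for eviction. */
	list_for_each_entry_safe(obj, next, &evict_list, evict_link) {
		if (!drm_mm_scan_remove_block(&scan, &obj->node))
			list_del(&obj->evict_link);
	}

	if (!found)
		return false;

	/* Actually evict: free each selected node, opening up the hole. */
	list_for_each_entry_safe(obj, next, &evict_list, evict_link)
		drm_mm_remove_node(&obj->node);

	return true;
}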
/**
* drm_mm_scan_init_with_range - initialize range-restricted lru scanning
* @scan: scan state
* @mm: drm_mm to scan
* @size: size of the allocation
* @alignment: alignment of the allocation
* @color: opaque tag value to use for the allocation
* @start: start of the allowed range for the allocation
* @end: end of the allowed range for the allocation
* @flags: flags to specify how the allocation will be performed afterwards
*
* This simply sets up the scanning routines with the parameters for the desired
* hole.
*
* Warning:
* As long as the scan list is non-empty, no other operations than
* adding/removing nodes to/from the scan list are allowed.
*/
void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
struct drm_mm *mm,
u64 size,
u64 alignment,
unsigned long color,
u64 start,
u64 end,
unsigned int flags)
{
DRM_MM_BUG_ON(start >= end);
DRM_MM_BUG_ON(!size || size > end - start);
DRM_MM_BUG_ON(mm->scan_active);
scan->mm = mm;
if (alignment <= 1)
alignment = 0;
scan->color = color;
scan->alignment = alignment;
scan->remainder_mask = is_power_of_2(alignment) ? alignment - 1 : 0;
scan->size = size;
scan->flags = flags;
DRM_MM_BUG_ON(end <= start);
scan->range_start = start;
scan->range_end = end;
scan->hit_start = U64_MAX;
scan->hit_end = 0;
}
EXPORT_SYMBOL(drm_mm_scan_init_with_range);
/**
* drm_mm_scan_add_block - add a node to the scan list
* @scan: the active drm_mm scanner
* @node: drm_mm_node to add
*
* Add a node to the scan list that might be freed to make space for the desired
@ -790,60 +628,87 @@ EXPORT_SYMBOL(drm_mm_init_scan_with_range);
* Returns:
* True if a hole has been found, false otherwise.
*/
bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
struct drm_mm_node *node)
{
struct drm_mm *mm = scan->mm;
struct drm_mm_node *hole;
u64 hole_start, hole_end;
u64 col_start, col_end;
u64 adj_start, adj_end;
mm->scanned_blocks++; DRM_MM_BUG_ON(node->mm != mm);
DRM_MM_BUG_ON(!node->allocated);
DRM_MM_BUG_ON(node->scanned_block);
node->scanned_block = true;
mm->scan_active++;
BUG_ON(node->scanned_block); /* Remove this block from the node_list so that we enlarge the hole
node->scanned_block = 1; * (distance between the end of our previous node and the start of
* our next), without poisoning the link so that we can restore it
* later in drm_mm_scan_remove_block().
*/
hole = list_prev_entry(node, node_list);
DRM_MM_BUG_ON(list_next_entry(hole, node_list) != node);
__list_del_entry(&node->node_list);
prev_node = list_entry(node->node_list.prev, struct drm_mm_node, hole_start = __drm_mm_hole_node_start(hole);
node_list); hole_end = __drm_mm_hole_node_end(hole);
node->scanned_preceeds_hole = prev_node->hole_follows;
prev_node->hole_follows = 1;
list_del(&node->node_list);
node->node_list.prev = &prev_node->node_list;
node->node_list.next = &mm->prev_scanned_node->node_list;
mm->prev_scanned_node = node;
adj_start = hole_start = drm_mm_hole_node_start(prev_node);
adj_end = hole_end = drm_mm_hole_node_end(prev_node);
if (mm->scan_check_range) {
if (adj_start < mm->scan_start)
adj_start = mm->scan_start;
if (adj_end > mm->scan_end)
adj_end = mm->scan_end;
}
col_start = hole_start;
col_end = hole_end;
if (mm->color_adjust) if (mm->color_adjust)
mm->color_adjust(prev_node, mm->scan_color, mm->color_adjust(hole, scan->color, &col_start, &col_end);
&adj_start, &adj_end);
if (check_free_hole(adj_start, adj_end, adj_start = max(col_start, scan->range_start);
mm->scan_size, mm->scan_alignment)) { adj_end = min(col_end, scan->range_end);
mm->scan_hit_start = hole_start; if (adj_end <= adj_start || adj_end - adj_start < scan->size)
mm->scan_hit_end = hole_end; return false;
return true;
if (scan->flags == DRM_MM_CREATE_TOP)
adj_start = adj_end - scan->size;
if (scan->alignment) {
u64 rem;
if (likely(scan->remainder_mask))
rem = adj_start & scan->remainder_mask;
else
div64_u64_rem(adj_start, scan->alignment, &rem);
if (rem) {
adj_start -= rem;
if (scan->flags != DRM_MM_CREATE_TOP)
adj_start += scan->alignment;
if (adj_start < max(col_start, scan->range_start) ||
min(col_end, scan->range_end) - adj_start < scan->size)
return false;
if (adj_end <= adj_start ||
adj_end - adj_start < scan->size)
return false;
}
} }
return false; scan->hit_start = adj_start;
scan->hit_end = adj_start + scan->size;
DRM_MM_BUG_ON(scan->hit_start >= scan->hit_end);
DRM_MM_BUG_ON(scan->hit_start < hole_start);
DRM_MM_BUG_ON(scan->hit_end > hole_end);
return true;
} }
EXPORT_SYMBOL(drm_mm_scan_add_block); EXPORT_SYMBOL(drm_mm_scan_add_block);
/**
 * drm_mm_scan_remove_block - remove a node from the scan list
 * @scan: the active drm_mm scanner
 * @node: drm_mm_node to remove
 *
 * Nodes _must_ be removed in exactly the reverse order from the scan list as
 * they have been added (e.g. using list_add as they are added and then
 * list_for_each over that eviction list to remove), otherwise the internal
 * state of the memory manager will be corrupted.
 *
 * When the scan list is empty, the selected memory nodes can be freed. An
 * immediately following drm_mm_search_free with !DRM_MM_SEARCH_BEST will then
@ -853,42 +718,74 @@ EXPORT_SYMBOL(drm_mm_scan_add_block);
 * True if this block should be evicted, false otherwise. Will always
 * return false when no hole has been found.
 */
bool drm_mm_scan_remove_block(struct drm_mm_node *node) bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
struct drm_mm_node *node)
{ {
struct drm_mm *mm = node->mm;
struct drm_mm_node *prev_node; struct drm_mm_node *prev_node;
mm->scanned_blocks--; DRM_MM_BUG_ON(node->mm != scan->mm);
DRM_MM_BUG_ON(!node->scanned_block);
node->scanned_block = false;
BUG_ON(!node->scanned_block); DRM_MM_BUG_ON(!node->mm->scan_active);
node->scanned_block = 0; node->mm->scan_active--;
prev_node = list_entry(node->node_list.prev, struct drm_mm_node, /* During drm_mm_scan_add_block() we decoupled this node leaving
node_list); * its pointers intact. Now that the caller is walking back along
* the eviction list we can restore this block into its rightful
prev_node->hole_follows = node->scanned_preceeds_hole; * place on the full node_list. To confirm that the caller is walking
* backwards correctly we check that prev_node->next == node->next,
* i.e. both believe the same node should be on the other side of the
* hole.
*/
prev_node = list_prev_entry(node, node_list);
DRM_MM_BUG_ON(list_next_entry(prev_node, node_list) !=
list_next_entry(node, node_list));
list_add(&node->node_list, &prev_node->node_list); list_add(&node->node_list, &prev_node->node_list);
return (drm_mm_hole_node_end(node) > mm->scan_hit_start && return (node->start + node->size > scan->hit_start &&
node->start < mm->scan_hit_end); node->start < scan->hit_end);
} }
EXPORT_SYMBOL(drm_mm_scan_remove_block); EXPORT_SYMBOL(drm_mm_scan_remove_block);
/**
 * drm_mm_clean - checks whether an allocator is clean
 * @mm: drm_mm allocator to check
 *
 * Returns:
 * True if the allocator is completely free, false if there's still a node
 * allocated in it.
 */
/**
 * drm_mm_scan_color_evict - evict overlapping nodes on either side of hole
 * @scan: drm_mm scan with target hole
 *
 * After completing an eviction scan and removing the selected nodes, we may
 * need to remove a few more nodes from either side of the target hole if
 * mm.color_adjust is being used.
 *
 * Returns:
 * A node to evict, or NULL if there are no overlapping nodes.
 */
bool drm_mm_clean(struct drm_mm * mm) struct drm_mm_node *drm_mm_scan_color_evict(struct drm_mm_scan *scan)
{ {
struct list_head *head = &mm->head_node.node_list; struct drm_mm *mm = scan->mm;
struct drm_mm_node *hole;
u64 hole_start, hole_end;
return (head->next->next == head); DRM_MM_BUG_ON(list_empty(&mm->hole_stack));
if (!mm->color_adjust)
return NULL;
hole = list_first_entry(&mm->hole_stack, typeof(*hole), hole_stack);
hole_start = __drm_mm_hole_node_start(hole);
hole_end = __drm_mm_hole_node_end(hole);
DRM_MM_BUG_ON(hole_start > scan->hit_start);
DRM_MM_BUG_ON(hole_end < scan->hit_end);
mm->color_adjust(hole, scan->color, &hole_start, &hole_end);
if (hole_start > scan->hit_start)
return hole;
if (hole_end < scan->hit_end)
return list_next_entry(hole, node_list);
return NULL;
} }
EXPORT_SYMBOL(drm_mm_clean); EXPORT_SYMBOL(drm_mm_scan_color_evict);
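Putting the reworked API together: all scan state now lives in a caller-owned struct drm_mm_scan instead of struct drm_mm, as the etnaviv conversion further down also shows. A minimal eviction sketch, assuming hypothetical driver types — example_obj and example_unmap() are stand-ins, not part of this series:

#include <linux/list.h>
#include <drm/drm_mm.h>

struct example_obj {
	struct drm_mm_node node;
	struct list_head scan_link;
};

static void example_unmap(struct example_obj *obj);	/* hypothetical */

static bool example_evict_for_hole(struct drm_mm *mm,
				   struct example_obj **candidates,
				   unsigned int nr, u64 size)
{
	struct drm_mm_scan scan;
	struct example_obj *obj, *tmp;
	struct drm_mm_node *node;
	LIST_HEAD(selected);
	bool found = false;
	unsigned int i;

	drm_mm_scan_init(&scan, mm, size, 0, 0, 0);

	for (i = 0; i < nr; i++) {
		obj = candidates[i];
		/* list_add() prepends, so walking 'selected' forwards
		 * later removes blocks in reverse order of addition,
		 * exactly as drm_mm_scan_remove_block() demands. */
		list_add(&obj->scan_link, &selected);
		if (drm_mm_scan_add_block(&scan, &obj->node)) {
			found = true;
			break;
		}
	}

	/* Every block added must be removed again; only the ones
	 * returning true overlap the hole and stay selected. */
	list_for_each_entry_safe(obj, tmp, &selected, scan_link)
		if (!drm_mm_scan_remove_block(&scan, &obj->node))
			list_del_init(&obj->scan_link);

	if (!found)
		return false;

	list_for_each_entry_safe(obj, tmp, &selected, scan_link) {
		example_unmap(obj);		/* hypothetical teardown */
		drm_mm_remove_node(&obj->node);
		list_del_init(&obj->scan_link);
	}

	/* With mm->color_adjust set, a neighbour on either side of
	 * the freed hole may need to go as well. */
	while ((node = drm_mm_scan_color_evict(&scan))) {
		obj = container_of(node, struct example_obj, node);
		example_unmap(obj);
		drm_mm_remove_node(&obj->node);
	}

	return true;
}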
/**
 * drm_mm_init - initialize a drm-mm allocator
@ -898,18 +795,17 @@ EXPORT_SYMBOL(drm_mm_clean);
 *
 * Note that @mm must be cleared to 0 before calling this function.
 */
void drm_mm_init(struct drm_mm * mm, u64 start, u64 size) void drm_mm_init(struct drm_mm *mm, u64 start, u64 size)
{ {
DRM_MM_BUG_ON(start + size <= start);
INIT_LIST_HEAD(&mm->hole_stack); INIT_LIST_HEAD(&mm->hole_stack);
mm->scanned_blocks = 0; mm->scan_active = 0;
/* Clever trick to avoid a special case in the free hole tracking. */ /* Clever trick to avoid a special case in the free hole tracking. */
INIT_LIST_HEAD(&mm->head_node.node_list); INIT_LIST_HEAD(&mm->head_node.node_list);
mm->head_node.allocated = 0; mm->head_node.allocated = 0;
mm->head_node.hole_follows = 1; mm->head_node.hole_follows = 1;
mm->head_node.scanned_block = 0;
mm->head_node.scanned_prev_free = 0;
mm->head_node.scanned_next_free = 0;
mm->head_node.mm = mm; mm->head_node.mm = mm;
mm->head_node.start = start + size; mm->head_node.start = start + size;
mm->head_node.size = start - mm->head_node.start; mm->head_node.size = start - mm->head_node.start;
@ -930,15 +826,14 @@ EXPORT_SYMBOL(drm_mm_init);
*/
void drm_mm_takedown(struct drm_mm *mm) void drm_mm_takedown(struct drm_mm *mm)
{ {
if (WARN(!list_empty(&mm->head_node.node_list), if (WARN(!drm_mm_clean(mm),
"Memory manager not clean during takedown.\n")) "Memory manager not clean during takedown.\n"))
show_leaks(mm); show_leaks(mm);
} }
EXPORT_SYMBOL(drm_mm_takedown); EXPORT_SYMBOL(drm_mm_takedown);
static u64 drm_mm_debug_hole(struct drm_mm_node *entry, static u64 drm_mm_debug_hole(const struct drm_mm_node *entry,
const char *prefix) const char *prefix)
{ {
u64 hole_start, hole_end, hole_size; u64 hole_start, hole_end, hole_size;
@ -959,9 +854,9 @@ static u64 drm_mm_debug_hole(struct drm_mm_node *entry,
* @mm: drm_mm allocator to dump
 * @prefix: prefix to use for dumping to dmesg
 */
void drm_mm_debug_table(struct drm_mm *mm, const char *prefix) void drm_mm_debug_table(const struct drm_mm *mm, const char *prefix)
{ {
struct drm_mm_node *entry; const struct drm_mm_node *entry;
u64 total_used = 0, total_free = 0, total = 0; u64 total_used = 0, total_free = 0, total = 0;
total_free += drm_mm_debug_hole(&mm->head_node, prefix); total_free += drm_mm_debug_hole(&mm->head_node, prefix);
@ -980,7 +875,7 @@ void drm_mm_debug_table(struct drm_mm *mm, const char *prefix)
EXPORT_SYMBOL(drm_mm_debug_table); EXPORT_SYMBOL(drm_mm_debug_table);
#if defined(CONFIG_DEBUG_FS) #if defined(CONFIG_DEBUG_FS)
static u64 drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *entry) static u64 drm_mm_dump_hole(struct seq_file *m, const struct drm_mm_node *entry)
{ {
u64 hole_start, hole_end, hole_size; u64 hole_start, hole_end, hole_size;
@ -1001,9 +896,9 @@ static u64 drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *entry)
* @m: seq_file to dump to
 * @mm: drm_mm allocator to dump
 */
int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm) int drm_mm_dump_table(struct seq_file *m, const struct drm_mm *mm)
{ {
struct drm_mm_node *entry; const struct drm_mm_node *entry;
u64 total_used = 0, total_free = 0, total = 0; u64 total_used = 0, total_free = 0, total = 0;
total_free += drm_mm_dump_hole(m, &mm->head_node); total_free += drm_mm_dump_hole(m, &mm->head_node);

View File

@ -20,6 +20,7 @@
* OF THIS SOFTWARE. * OF THIS SOFTWARE.
*/ */
#include <drm/drm_encoder.h>
#include <drm/drm_mode_config.h> #include <drm/drm_mode_config.h>
#include <drm/drmP.h> #include <drm/drmP.h>
@ -84,113 +85,74 @@ int drm_mode_getresources(struct drm_device *dev, void *data,
struct drm_file *file_priv) struct drm_file *file_priv)
{ {
struct drm_mode_card_res *card_res = data; struct drm_mode_card_res *card_res = data;
struct list_head *lh;
struct drm_framebuffer *fb; struct drm_framebuffer *fb;
struct drm_connector *connector; struct drm_connector *connector;
struct drm_crtc *crtc; struct drm_crtc *crtc;
struct drm_encoder *encoder; struct drm_encoder *encoder;
int ret = 0; int count, ret = 0;
int connector_count = 0;
int crtc_count = 0;
int fb_count = 0;
int encoder_count = 0;
int copied = 0;
uint32_t __user *fb_id; uint32_t __user *fb_id;
uint32_t __user *crtc_id; uint32_t __user *crtc_id;
uint32_t __user *connector_id; uint32_t __user *connector_id;
uint32_t __user *encoder_id; uint32_t __user *encoder_id;
struct drm_connector_list_iter conn_iter;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL; return -EINVAL;
mutex_lock(&file_priv->fbs_lock); mutex_lock(&file_priv->fbs_lock);
/* count = 0;
* For the non-control nodes we need to limit the list of resources fb_id = u64_to_user_ptr(card_res->fb_id_ptr);
* by IDs in the group list for this node list_for_each_entry(fb, &file_priv->fbs, filp_head) {
*/ if (count < card_res->count_fbs &&
list_for_each(lh, &file_priv->fbs) put_user(fb->base.id, fb_id + count)) {
fb_count++; mutex_unlock(&file_priv->fbs_lock);
return -EFAULT;
/* handle this in 4 parts */
/* FBs */
if (card_res->count_fbs >= fb_count) {
copied = 0;
fb_id = (uint32_t __user *)(unsigned long)card_res->fb_id_ptr;
list_for_each_entry(fb, &file_priv->fbs, filp_head) {
if (put_user(fb->base.id, fb_id + copied)) {
mutex_unlock(&file_priv->fbs_lock);
return -EFAULT;
}
copied++;
} }
count++;
} }
card_res->count_fbs = fb_count; card_res->count_fbs = count;
mutex_unlock(&file_priv->fbs_lock); mutex_unlock(&file_priv->fbs_lock);
/* mode_config.mutex protects the connector list against e.g. DP MST
* connector hot-adding. CRTC/Plane lists are invariant. */
mutex_lock(&dev->mode_config.mutex);
drm_for_each_crtc(crtc, dev)
crtc_count++;
drm_for_each_connector(connector, dev)
connector_count++;
drm_for_each_encoder(encoder, dev)
encoder_count++;
card_res->max_height = dev->mode_config.max_height; card_res->max_height = dev->mode_config.max_height;
card_res->min_height = dev->mode_config.min_height; card_res->min_height = dev->mode_config.min_height;
card_res->max_width = dev->mode_config.max_width; card_res->max_width = dev->mode_config.max_width;
card_res->min_width = dev->mode_config.min_width; card_res->min_width = dev->mode_config.min_width;
/* CRTCs */ count = 0;
if (card_res->count_crtcs >= crtc_count) { crtc_id = u64_to_user_ptr(card_res->crtc_id_ptr);
copied = 0; drm_for_each_crtc(crtc, dev) {
crtc_id = (uint32_t __user *)(unsigned long)card_res->crtc_id_ptr; if (count < card_res->count_crtcs &&
drm_for_each_crtc(crtc, dev) { put_user(crtc->base.id, crtc_id + count))
if (put_user(crtc->base.id, crtc_id + copied)) { return -EFAULT;
ret = -EFAULT; count++;
goto out;
}
copied++;
}
} }
card_res->count_crtcs = crtc_count; card_res->count_crtcs = count;
/* Encoders */ count = 0;
if (card_res->count_encoders >= encoder_count) { encoder_id = u64_to_user_ptr(card_res->encoder_id_ptr);
copied = 0; drm_for_each_encoder(encoder, dev) {
encoder_id = (uint32_t __user *)(unsigned long)card_res->encoder_id_ptr; if (count < card_res->count_encoders &&
drm_for_each_encoder(encoder, dev) { put_user(encoder->base.id, encoder_id + count))
if (put_user(encoder->base.id, encoder_id + return -EFAULT;
copied)) { count++;
ret = -EFAULT;
goto out;
}
copied++;
}
} }
card_res->count_encoders = encoder_count; card_res->count_encoders = count;
/* Connectors */ drm_connector_list_iter_get(dev, &conn_iter);
if (card_res->count_connectors >= connector_count) { count = 0;
copied = 0; connector_id = u64_to_user_ptr(card_res->connector_id_ptr);
connector_id = (uint32_t __user *)(unsigned long)card_res->connector_id_ptr; drm_for_each_connector_iter(connector, &conn_iter) {
drm_for_each_connector(connector, dev) { if (count < card_res->count_connectors &&
if (put_user(connector->base.id, put_user(connector->base.id, connector_id + count)) {
connector_id + copied)) { drm_connector_list_iter_put(&conn_iter);
ret = -EFAULT; return -EFAULT;
goto out;
}
copied++;
} }
count++;
} }
card_res->count_connectors = connector_count; card_res->count_connectors = count;
drm_connector_list_iter_put(&conn_iter);
out:
mutex_unlock(&dev->mode_config.mutex);
return ret; return ret;
} }
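The rewrite above folds the old count-pass/copy-pass pairs into a single walk per object type, capped by the counts userspace supplied. The idiom, sketched with hypothetical names (struct example_args and example_obj are stand-ins):

static int example_copy_ids(struct example_args *args,
			    struct list_head *objects)
{
	struct example_obj *obj;
	u32 __user *ids = u64_to_user_ptr(args->ids_ptr);
	unsigned int count = 0;

	list_for_each_entry(obj, objects, head) {
		/* copy while the user-supplied array has room ... */
		if (count < args->count_ids &&
		    put_user(obj->base.id, ids + count))
			return -EFAULT;
		/* ... but always report the full object count back */
		count++;
	}
	args->count_ids = count;

	return 0;
}

Userspace probes once with a zero count to size its buffers, then calls again; anything hot-added in between simply shows up in the count reported by the next call.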
@ -208,6 +170,7 @@ void drm_mode_config_reset(struct drm_device *dev)
struct drm_plane *plane; struct drm_plane *plane;
struct drm_encoder *encoder; struct drm_encoder *encoder;
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
drm_for_each_plane(plane, dev) drm_for_each_plane(plane, dev)
if (plane->funcs->reset) if (plane->funcs->reset)
@ -221,11 +184,11 @@ void drm_mode_config_reset(struct drm_device *dev)
if (encoder->funcs->reset) if (encoder->funcs->reset)
encoder->funcs->reset(encoder); encoder->funcs->reset(encoder);
mutex_lock(&dev->mode_config.mutex); drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector(connector, dev) drm_for_each_connector_iter(connector, &conn_iter)
if (connector->funcs->reset) if (connector->funcs->reset)
connector->funcs->reset(connector); connector->funcs->reset(connector);
mutex_unlock(&dev->mode_config.mutex); drm_connector_list_iter_put(&conn_iter);
} }
EXPORT_SYMBOL(drm_mode_config_reset); EXPORT_SYMBOL(drm_mode_config_reset);
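This is the pattern that replaces drm_for_each_connector() throughout the series, sketched on its own (example_handle is a hypothetical per-connector callback):

static void example_for_each_connector(struct drm_device *dev)
{
	struct drm_connector *connector;
	struct drm_connector_list_iter conn_iter;

	drm_connector_list_iter_get(dev, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter) {
		/* the iterator holds a reference on 'connector', so the
		 * walk survives concurrent hotplug (e.g. DP MST) without
		 * holding mode_config.mutex across the whole loop */
		example_handle(connector);	/* hypothetical */
	}
	drm_connector_list_iter_put(&conn_iter);
}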
@ -406,10 +369,9 @@ void drm_mode_config_init(struct drm_device *dev)
idr_init(&dev->mode_config.crtc_idr); idr_init(&dev->mode_config.crtc_idr);
idr_init(&dev->mode_config.tile_idr); idr_init(&dev->mode_config.tile_idr);
ida_init(&dev->mode_config.connector_ida); ida_init(&dev->mode_config.connector_ida);
spin_lock_init(&dev->mode_config.connector_list_lock);
drm_modeset_lock_all(dev);
drm_mode_create_standard_properties(dev); drm_mode_create_standard_properties(dev);
drm_modeset_unlock_all(dev);
/* Just to be sure */ /* Just to be sure */
dev->mode_config.num_fb = 0; dev->mode_config.num_fb = 0;
@ -436,7 +398,8 @@ EXPORT_SYMBOL(drm_mode_config_init);
*/ */
void drm_mode_config_cleanup(struct drm_device *dev) void drm_mode_config_cleanup(struct drm_device *dev)
{ {
struct drm_connector *connector, *ot; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_crtc *crtc, *ct; struct drm_crtc *crtc, *ct;
struct drm_encoder *encoder, *enct; struct drm_encoder *encoder, *enct;
struct drm_framebuffer *fb, *fbt; struct drm_framebuffer *fb, *fbt;
@ -449,10 +412,16 @@ void drm_mode_config_cleanup(struct drm_device *dev)
encoder->funcs->destroy(encoder); encoder->funcs->destroy(encoder);
} }
list_for_each_entry_safe(connector, ot, drm_connector_list_iter_get(dev, &conn_iter);
&dev->mode_config.connector_list, head) { drm_for_each_connector_iter(connector, &conn_iter) {
connector->funcs->destroy(connector); /* drm_connector_list_iter holds a full reference to the
* current connector itself, which means it is inherently safe
* against unreferencing the current connector - but not against
* deleting it right away. */
drm_connector_unreference(connector);
} }
drm_connector_list_iter_put(&conn_iter);
WARN_ON(!list_empty(&dev->mode_config.connector_list));
list_for_each_entry_safe(property, pt, &dev->mode_config.property_list, list_for_each_entry_safe(property, pt, &dev->mode_config.property_list,
head) { head) {

View File

@ -23,6 +23,7 @@
#include <linux/export.h> #include <linux/export.h>
#include <drm/drmP.h> #include <drm/drmP.h>
#include <drm/drm_mode_object.h> #include <drm/drm_mode_object.h>
#include <drm/drm_atomic.h>
#include "drm_crtc_internal.h" #include "drm_crtc_internal.h"
@ -273,7 +274,7 @@ int drm_object_property_get_value(struct drm_mode_object *obj,
* their value in obj->properties->values[].. mostly to avoid * their value in obj->properties->values[].. mostly to avoid
* having to deal w/ EDID and similar props in atomic paths: * having to deal w/ EDID and similar props in atomic paths:
*/ */
if (drm_core_check_feature(property->dev, DRIVER_ATOMIC) && if (drm_drv_uses_atomic_modeset(property->dev) &&
!(property->flags & DRM_MODE_PROP_IMMUTABLE)) !(property->flags & DRM_MODE_PROP_IMMUTABLE))
return drm_atomic_get_property(obj, property, val); return drm_atomic_get_property(obj, property, val);

View File

@ -48,6 +48,7 @@ void drm_helper_move_panel_connectors_to_head(struct drm_device *dev)
INIT_LIST_HEAD(&panel_list); INIT_LIST_HEAD(&panel_list);
spin_lock_irq(&dev->mode_config.connector_list_lock);
list_for_each_entry_safe(connector, tmp, list_for_each_entry_safe(connector, tmp,
&dev->mode_config.connector_list, head) { &dev->mode_config.connector_list, head) {
if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS || if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS ||
@ -57,38 +58,27 @@ void drm_helper_move_panel_connectors_to_head(struct drm_device *dev)
} }
list_splice(&panel_list, &dev->mode_config.connector_list); list_splice(&panel_list, &dev->mode_config.connector_list);
spin_unlock_irq(&dev->mode_config.connector_list_lock);
} }
EXPORT_SYMBOL(drm_helper_move_panel_connectors_to_head); EXPORT_SYMBOL(drm_helper_move_panel_connectors_to_head);
/**
 * drm_helper_mode_fill_fb_struct - fill out framebuffer metadata
 * @dev: DRM device
 * @fb: drm_framebuffer object to fill out
 * @mode_cmd: metadata from the userspace fb creation request
 *
 * This helper can be used in a driver's fb_create callback to pre-fill the fb's
 * metadata fields.
 */
void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb, void drm_helper_mode_fill_fb_struct(struct drm_device *dev,
struct drm_framebuffer *fb,
const struct drm_mode_fb_cmd2 *mode_cmd) const struct drm_mode_fb_cmd2 *mode_cmd)
{ {
const struct drm_format_info *info;
int i; int i;
info = drm_format_info(mode_cmd->pixel_format); fb->dev = dev;
if (!info || !info->depth) { fb->format = drm_format_info(mode_cmd->pixel_format);
struct drm_format_name_buf format_name;
DRM_DEBUG_KMS("non-RGB pixel format %s\n",
drm_get_format_name(mode_cmd->pixel_format,
&format_name));
fb->depth = 0;
fb->bits_per_pixel = 0;
} else {
fb->depth = info->depth;
fb->bits_per_pixel = info->cpp[0] * 8;
}
fb->width = mode_cmd->width; fb->width = mode_cmd->width;
fb->height = mode_cmd->height; fb->height = mode_cmd->height;
for (i = 0; i < 4; i++) { for (i = 0; i < 4; i++) {
@ -96,7 +86,6 @@ void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
fb->offsets[i] = mode_cmd->offsets[i]; fb->offsets[i] = mode_cmd->offsets[i];
} }
fb->modifier = mode_cmd->modifier[0]; fb->modifier = mode_cmd->modifier[0];
fb->pixel_format = mode_cmd->pixel_format;
fb->flags = mode_cmd->flags; fb->flags = mode_cmd->flags;
} }
EXPORT_SYMBOL(drm_helper_mode_fill_fb_struct); EXPORT_SYMBOL(drm_helper_mode_fill_fb_struct);
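With fb->format pointing at the drm_format_info looked up here, the old pixel_format/bits_per_pixel/depth fields disappear, and the helper needs @dev to do the lookup. A hedged sketch of a caller (example_fb and example_fb_funcs are made-up driver names):

static int example_fb_init(struct drm_device *dev,
			   struct example_fb *efb,
			   const struct drm_mode_fb_cmd2 *mode_cmd)
{
	/* fills width/height/pitches and the new fb->format pointer */
	drm_helper_mode_fill_fb_struct(dev, &efb->base, mode_cmd);

	/* derived values now come from the format info:
	 *   efb->base.format->cpp[0] - bytes per pixel (was bits_per_pixel / 8)
	 *   efb->base.format->format - fourcc (was fb->pixel_format)
	 *   efb->base.format->depth  - legacy depth (was fb->depth)
	 */
	return drm_framebuffer_init(dev, &efb->base, &example_fb_funcs);
}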

View File

@ -4,6 +4,7 @@
#include <linux/of_graph.h> #include <linux/of_graph.h>
#include <drm/drmP.h> #include <drm/drmP.h>
#include <drm/drm_crtc.h> #include <drm/drm_crtc.h>
#include <drm/drm_encoder.h>
#include <drm/drm_of.h> #include <drm/drm_of.h>
static void drm_release_of(struct device *dev, void *data) static void drm_release_of(struct device *dev, void *data)

View File

@ -392,12 +392,16 @@ int drm_mode_getplane(struct drm_device *dev, void *data,
return -ENOENT; return -ENOENT;
drm_modeset_lock(&plane->mutex, NULL); drm_modeset_lock(&plane->mutex, NULL);
if (plane->crtc) if (plane->state && plane->state->crtc)
plane_resp->crtc_id = plane->state->crtc->base.id;
else if (!plane->state && plane->crtc)
plane_resp->crtc_id = plane->crtc->base.id; plane_resp->crtc_id = plane->crtc->base.id;
else else
plane_resp->crtc_id = 0; plane_resp->crtc_id = 0;
if (plane->fb) if (plane->state && plane->state->fb)
plane_resp->fb_id = plane->state->fb->base.id;
else if (!plane->state && plane->fb)
plane_resp->fb_id = plane->fb->base.id; plane_resp->fb_id = plane->fb->base.id;
else else
plane_resp->fb_id = 0; plane_resp->fb_id = 0;
@ -478,11 +482,11 @@ static int __setplane_internal(struct drm_plane *plane,
} }
/* Check whether this plane supports the fb pixel format. */ /* Check whether this plane supports the fb pixel format. */
ret = drm_plane_check_pixel_format(plane, fb->pixel_format); ret = drm_plane_check_pixel_format(plane, fb->format->format);
if (ret) { if (ret) {
struct drm_format_name_buf format_name; struct drm_format_name_buf format_name;
DRM_DEBUG_KMS("Invalid pixel format %s\n", DRM_DEBUG_KMS("Invalid pixel format %s\n",
drm_get_format_name(fb->pixel_format, drm_get_format_name(fb->format->format,
&format_name)); &format_name));
goto out; goto out;
} }
@ -854,7 +858,7 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
if (ret) if (ret)
goto out; goto out;
if (crtc->primary->fb->pixel_format != fb->pixel_format) { if (crtc->primary->fb->format != fb->format) {
DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n"); DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n");
ret = -EINVAL; ret = -EINVAL;
goto out; goto out;

View File

@ -29,6 +29,7 @@
#include <drm/drm_rect.h> #include <drm/drm_rect.h>
#include <drm/drm_atomic.h> #include <drm/drm_atomic.h>
#include <drm/drm_crtc_helper.h> #include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder.h>
#include <drm/drm_atomic_helper.h> #include <drm/drm_atomic_helper.h>
#define SUBPIXEL_MASK 0xffff #define SUBPIXEL_MASK 0xffff
@ -74,6 +75,7 @@ static int get_connectors_for_crtc(struct drm_crtc *crtc,
{ {
struct drm_device *dev = crtc->dev; struct drm_device *dev = crtc->dev;
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
int count = 0; int count = 0;
/* /*
@ -83,7 +85,8 @@ static int get_connectors_for_crtc(struct drm_crtc *crtc,
*/ */
WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex)); WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
drm_for_each_connector(connector, dev) { drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->encoder && connector->encoder->crtc == crtc) { if (connector->encoder && connector->encoder->crtc == crtc) {
if (connector_list != NULL && count < num_connectors) if (connector_list != NULL && count < num_connectors)
*(connector_list++) = connector; *(connector_list++) = connector;
@ -91,6 +94,7 @@ static int get_connectors_for_crtc(struct drm_crtc *crtc,
count++; count++;
} }
} }
drm_connector_list_iter_put(&conn_iter);
return count; return count;
} }

View File

@ -129,6 +129,7 @@ void drm_kms_helper_poll_enable_locked(struct drm_device *dev)
{ {
bool poll = false; bool poll = false;
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
unsigned long delay = DRM_OUTPUT_POLL_PERIOD; unsigned long delay = DRM_OUTPUT_POLL_PERIOD;
WARN_ON(!mutex_is_locked(&dev->mode_config.mutex)); WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
@ -136,11 +137,13 @@ void drm_kms_helper_poll_enable_locked(struct drm_device *dev)
if (!dev->mode_config.poll_enabled || !drm_kms_helper_poll) if (!dev->mode_config.poll_enabled || !drm_kms_helper_poll)
return; return;
drm_for_each_connector(connector, dev) { drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT | if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT |
DRM_CONNECTOR_POLL_DISCONNECT)) DRM_CONNECTOR_POLL_DISCONNECT))
poll = true; poll = true;
} }
drm_connector_list_iter_put(&conn_iter);
if (dev->mode_config.delayed_event) { if (dev->mode_config.delayed_event) {
poll = true; poll = true;
@ -382,6 +385,7 @@ static void output_poll_execute(struct work_struct *work)
struct delayed_work *delayed_work = to_delayed_work(work); struct delayed_work *delayed_work = to_delayed_work(work);
struct drm_device *dev = container_of(delayed_work, struct drm_device, mode_config.output_poll_work); struct drm_device *dev = container_of(delayed_work, struct drm_device, mode_config.output_poll_work);
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
enum drm_connector_status old_status; enum drm_connector_status old_status;
bool repoll = false, changed; bool repoll = false, changed;
@ -397,8 +401,8 @@ static void output_poll_execute(struct work_struct *work)
goto out; goto out;
} }
drm_for_each_connector(connector, dev) { drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
/* Ignore forced connectors. */ /* Ignore forced connectors. */
if (connector->force) if (connector->force)
continue; continue;
@ -451,6 +455,7 @@ static void output_poll_execute(struct work_struct *work)
changed = true; changed = true;
} }
} }
drm_connector_list_iter_put(&conn_iter);
mutex_unlock(&dev->mode_config.mutex); mutex_unlock(&dev->mode_config.mutex);
@ -562,6 +567,7 @@ EXPORT_SYMBOL(drm_kms_helper_poll_fini);
bool drm_helper_hpd_irq_event(struct drm_device *dev) bool drm_helper_hpd_irq_event(struct drm_device *dev)
{ {
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
enum drm_connector_status old_status; enum drm_connector_status old_status;
bool changed = false; bool changed = false;
@ -569,8 +575,8 @@ bool drm_helper_hpd_irq_event(struct drm_device *dev)
return false; return false;
mutex_lock(&dev->mode_config.mutex); mutex_lock(&dev->mode_config.mutex);
drm_for_each_connector(connector, dev) { drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
/* Only handle HPD capable connectors. */ /* Only handle HPD capable connectors. */
if (!(connector->polled & DRM_CONNECTOR_POLL_HPD)) if (!(connector->polled & DRM_CONNECTOR_POLL_HPD))
continue; continue;
@ -586,7 +592,7 @@ bool drm_helper_hpd_irq_event(struct drm_device *dev)
if (old_status != connector->status) if (old_status != connector->status)
changed = true; changed = true;
} }
drm_connector_list_iter_put(&conn_iter);
mutex_unlock(&dev->mode_config.mutex); mutex_unlock(&dev->mode_config.mutex);
if (changed) if (changed)

View File

@ -182,29 +182,10 @@ static const struct drm_plane_funcs drm_simple_kms_plane_funcs = {
int drm_simple_display_pipe_attach_bridge(struct drm_simple_display_pipe *pipe, int drm_simple_display_pipe_attach_bridge(struct drm_simple_display_pipe *pipe,
struct drm_bridge *bridge) struct drm_bridge *bridge)
{ {
bridge->encoder = &pipe->encoder; return drm_bridge_attach(&pipe->encoder, bridge, NULL);
pipe->encoder.bridge = bridge;
return drm_bridge_attach(pipe->encoder.dev, bridge);
} }
EXPORT_SYMBOL(drm_simple_display_pipe_attach_bridge); EXPORT_SYMBOL(drm_simple_display_pipe_attach_bridge);
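drm_bridge_attach() now takes the encoder plus an optional previous bridge and wires up encoder->bridge itself, which is also why the explicit detach helper below can go away: encoder cleanup detaches bridges on its own. A sketch of chaining two bridges under the new contract (names hypothetical):

static int example_attach_chain(struct drm_encoder *encoder,
				struct drm_bridge *first,
				struct drm_bridge *second)
{
	int ret;

	/* the first bridge hangs directly off the encoder */
	ret = drm_bridge_attach(encoder, first, NULL);
	if (ret)
		return ret;

	/* later bridges attach behind the previous one */
	return drm_bridge_attach(encoder, second, first);
}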
/**
* drm_simple_display_pipe_detach_bridge - Detach the bridge from the display pipe
* @pipe: simple display pipe object
*
* Detaches the drm bridge previously attached with
* drm_simple_display_pipe_attach_bridge()
*/
void drm_simple_display_pipe_detach_bridge(struct drm_simple_display_pipe *pipe)
{
if (WARN_ON(!pipe->encoder.bridge))
return;
drm_bridge_detach(pipe->encoder.bridge);
pipe->encoder.bridge = NULL;
}
EXPORT_SYMBOL(drm_simple_display_pipe_detach_bridge);
/**
 * drm_simple_display_pipe_init - Initialize a simple display pipeline
 * @dev: DRM device

View File

@ -592,7 +592,7 @@ static void etnaviv_unbind(struct device *dev)
drm->dev_private = NULL; drm->dev_private = NULL;
kfree(priv); kfree(priv);
drm_put_dev(drm); drm_dev_unref(drm);
} }
static const struct component_master_ops etnaviv_master_ops = { static const struct component_master_ops etnaviv_master_ops = {

View File

@ -113,6 +113,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
while (1) { while (1) {
struct etnaviv_vram_mapping *m, *n; struct etnaviv_vram_mapping *m, *n;
struct drm_mm_scan scan;
struct list_head list; struct list_head list;
bool found; bool found;
@ -134,7 +135,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
} }
/* Try to retire some entries */ /* Try to retire some entries */
drm_mm_init_scan(&mmu->mm, size, 0, 0); drm_mm_scan_init(&scan, &mmu->mm, size, 0, 0, 0);
found = 0; found = 0;
INIT_LIST_HEAD(&list); INIT_LIST_HEAD(&list);
@ -151,7 +152,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
continue; continue;
list_add(&free->scan_node, &list); list_add(&free->scan_node, &list);
if (drm_mm_scan_add_block(&free->vram_node)) { if (drm_mm_scan_add_block(&scan, &free->vram_node)) {
found = true; found = true;
break; break;
} }
@ -160,7 +161,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
if (!found) { if (!found) {
/* Nothing found, clean up and fail */ /* Nothing found, clean up and fail */
list_for_each_entry_safe(m, n, &list, scan_node) list_for_each_entry_safe(m, n, &list, scan_node)
BUG_ON(drm_mm_scan_remove_block(&m->vram_node)); BUG_ON(drm_mm_scan_remove_block(&scan, &m->vram_node));
break; break;
} }
@ -171,7 +172,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
* can leave the block pinned.
 */
list_for_each_entry_safe(m, n, &list, scan_node) list_for_each_entry_safe(m, n, &list, scan_node)
if (!drm_mm_scan_remove_block(&m->vram_node)) if (!drm_mm_scan_remove_block(&scan, &m->vram_node))
list_del_init(&m->scan_node); list_del_init(&m->scan_node);
/* /*

View File

@ -200,7 +200,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
val = readl(ctx->addr + DECON_WINCONx(win)); val = readl(ctx->addr + DECON_WINCONx(win));
val &= ~WINCONx_BPPMODE_MASK; val &= ~WINCONx_BPPMODE_MASK;
switch (fb->pixel_format) { switch (fb->format->format) {
case DRM_FORMAT_XRGB1555: case DRM_FORMAT_XRGB1555:
val |= WINCONx_BPPMODE_16BPP_I1555; val |= WINCONx_BPPMODE_16BPP_I1555;
val |= WINCONx_HAWSWP_F; val |= WINCONx_HAWSWP_F;
@ -226,7 +226,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
return; return;
} }
DRM_DEBUG_KMS("bpp = %u\n", fb->bits_per_pixel); DRM_DEBUG_KMS("bpp = %u\n", fb->format->cpp[0] * 8);
/*
 * In case of exynos, setting dma-burst to 16Word causes permanent
@ -275,7 +275,7 @@ static void decon_update_plane(struct exynos_drm_crtc *crtc,
struct decon_context *ctx = crtc->ctx; struct decon_context *ctx = crtc->ctx;
struct drm_framebuffer *fb = state->base.fb; struct drm_framebuffer *fb = state->base.fb;
unsigned int win = plane->index; unsigned int win = plane->index;
unsigned int bpp = fb->bits_per_pixel >> 3; unsigned int bpp = fb->format->cpp[0];
unsigned int pitch = fb->pitches[0]; unsigned int pitch = fb->pitches[0];
dma_addr_t dma_addr = exynos_drm_fb_dma_addr(fb, 0); dma_addr_t dma_addr = exynos_drm_fb_dma_addr(fb, 0);
u32 val; u32 val;

View File

@ -281,7 +281,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
val = readl(ctx->regs + WINCON(win)); val = readl(ctx->regs + WINCON(win));
val &= ~WINCONx_BPPMODE_MASK; val &= ~WINCONx_BPPMODE_MASK;
switch (fb->pixel_format) { switch (fb->format->format) {
case DRM_FORMAT_RGB565: case DRM_FORMAT_RGB565:
val |= WINCONx_BPPMODE_16BPP_565; val |= WINCONx_BPPMODE_16BPP_565;
val |= WINCONx_BURSTLEN_16WORD; val |= WINCONx_BURSTLEN_16WORD;
@ -330,7 +330,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
break; break;
} }
DRM_DEBUG_KMS("bpp = %d\n", fb->bits_per_pixel); DRM_DEBUG_KMS("bpp = %d\n", fb->format->cpp[0] * 8);
/*
 * In case of exynos, setting dma-burst to 16Word causes permanent
@ -340,7 +340,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
 * movement causes unstable DMA which results into iommu crash/tear.
 */
padding = (fb->pitches[0] / (fb->bits_per_pixel >> 3)) - fb->width; padding = (fb->pitches[0] / fb->format->cpp[0]) - fb->width;
if (fb->width + padding < MIN_FB_WIDTH_FOR_16WORD_BURST) { if (fb->width + padding < MIN_FB_WIDTH_FOR_16WORD_BURST) {
val &= ~WINCONx_BURSTLEN_MASK; val &= ~WINCONx_BURSTLEN_MASK;
val |= WINCONx_BURSTLEN_8WORD; val |= WINCONx_BURSTLEN_8WORD;
@ -407,7 +407,7 @@ static void decon_update_plane(struct exynos_drm_crtc *crtc,
unsigned int last_x; unsigned int last_x;
unsigned int last_y; unsigned int last_y;
unsigned int win = plane->index; unsigned int win = plane->index;
unsigned int bpp = fb->bits_per_pixel >> 3; unsigned int bpp = fb->format->cpp[0];
unsigned int pitch = fb->pitches[0]; unsigned int pitch = fb->pitches[0];
if (ctx->suspended) if (ctx->suspended)

View File

@ -99,7 +99,6 @@ static int exynos_dp_bridge_attach(struct analogix_dp_plat_data *plat_data,
struct drm_connector *connector) struct drm_connector *connector)
{ {
struct exynos_dp_device *dp = to_dp(plat_data); struct exynos_dp_device *dp = to_dp(plat_data);
struct drm_encoder *encoder = &dp->encoder;
int ret; int ret;
drm_connector_register(connector); drm_connector_register(connector);
@ -107,9 +106,7 @@ static int exynos_dp_bridge_attach(struct analogix_dp_plat_data *plat_data,
/* Pre-empt DP connector creation if there's a bridge */ /* Pre-empt DP connector creation if there's a bridge */
if (dp->ptn_bridge) { if (dp->ptn_bridge) {
bridge->next = dp->ptn_bridge; ret = drm_bridge_attach(&dp->encoder, dp->ptn_bridge, bridge);
dp->ptn_bridge->encoder = encoder;
ret = drm_bridge_attach(encoder->dev, dp->ptn_bridge);
if (ret) { if (ret) {
DRM_ERROR("Failed to attach bridge to drm\n"); DRM_ERROR("Failed to attach bridge to drm\n");
bridge->next = NULL; bridge->next = NULL;

View File

@ -1718,10 +1718,8 @@ static int exynos_dsi_bind(struct device *dev, struct device *master,
} }
bridge = of_drm_find_bridge(dsi->bridge_node); bridge = of_drm_find_bridge(dsi->bridge_node);
if (bridge) { if (bridge)
encoder->bridge = bridge; drm_bridge_attach(encoder, bridge, NULL);
drm_bridge_attach(drm_dev, bridge);
}
return mipi_dsi_host_register(&dsi->dsi_host); return mipi_dsi_host_register(&dsi->dsi_host);
} }

View File

@ -126,7 +126,7 @@ exynos_drm_framebuffer_init(struct drm_device *dev,
+ mode_cmd->offsets[i]; + mode_cmd->offsets[i];
} }
drm_helper_mode_fill_fb_struct(&exynos_fb->fb, mode_cmd); drm_helper_mode_fill_fb_struct(dev, &exynos_fb->fb, mode_cmd);
ret = drm_framebuffer_init(dev, &exynos_fb->fb, &exynos_drm_fb_funcs); ret = drm_framebuffer_init(dev, &exynos_fb->fb, &exynos_drm_fb_funcs);
if (ret < 0) { if (ret < 0) {

View File

@ -76,7 +76,7 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
{ {
struct fb_info *fbi; struct fb_info *fbi;
struct drm_framebuffer *fb = helper->fb; struct drm_framebuffer *fb = helper->fb;
unsigned int size = fb->width * fb->height * (fb->bits_per_pixel >> 3); unsigned int size = fb->width * fb->height * fb->format->cpp[0];
unsigned int nr_pages; unsigned int nr_pages;
unsigned long offset; unsigned long offset;
@ -90,7 +90,7 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
fbi->flags = FBINFO_FLAG_DEFAULT; fbi->flags = FBINFO_FLAG_DEFAULT;
fbi->fbops = &exynos_drm_fb_ops; fbi->fbops = &exynos_drm_fb_ops;
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth); drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height); drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);
nr_pages = exynos_gem->size >> PAGE_SHIFT; nr_pages = exynos_gem->size >> PAGE_SHIFT;
@ -103,7 +103,7 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
return -EIO; return -EIO;
} }
offset = fbi->var.xoffset * (fb->bits_per_pixel >> 3); offset = fbi->var.xoffset * fb->format->cpp[0];
offset += fbi->var.yoffset * fb->pitches[0]; offset += fbi->var.yoffset * fb->pitches[0];
fbi->screen_base = exynos_gem->kvaddr + offset; fbi->screen_base = exynos_gem->kvaddr + offset;

View File

@ -738,7 +738,7 @@ static void fimd_update_plane(struct exynos_drm_crtc *crtc,
unsigned long val, size, offset; unsigned long val, size, offset;
unsigned int last_x, last_y, buf_offsize, line_size; unsigned int last_x, last_y, buf_offsize, line_size;
unsigned int win = plane->index; unsigned int win = plane->index;
unsigned int bpp = fb->bits_per_pixel >> 3; unsigned int bpp = fb->format->cpp[0];
unsigned int pitch = fb->pitches[0]; unsigned int pitch = fb->pitches[0];
if (ctx->suspended) if (ctx->suspended)
@ -804,7 +804,7 @@ static void fimd_update_plane(struct exynos_drm_crtc *crtc,
DRM_DEBUG_KMS("osd size = 0x%x\n", (unsigned int)val); DRM_DEBUG_KMS("osd size = 0x%x\n", (unsigned int)val);
} }
fimd_win_set_pixfmt(ctx, win, fb->pixel_format, state->src.w); fimd_win_set_pixfmt(ctx, win, fb->format->format, state->src.w);
/* hardware window 0 doesn't support color key. */ /* hardware window 0 doesn't support color key. */
if (win != 0) if (win != 0)

View File

@ -485,7 +485,7 @@ static void vp_video_buffer(struct mixer_context *ctx,
bool crcb_mode = false; bool crcb_mode = false;
u32 val; u32 val;
switch (fb->pixel_format) { switch (fb->format->format) {
case DRM_FORMAT_NV12: case DRM_FORMAT_NV12:
crcb_mode = false; crcb_mode = false;
break; break;
@ -494,7 +494,7 @@ static void vp_video_buffer(struct mixer_context *ctx,
break; break;
default: default:
DRM_ERROR("pixel format for vp is wrong [%d].\n", DRM_ERROR("pixel format for vp is wrong [%d].\n",
fb->pixel_format); fb->format->format);
return; return;
} }
@ -597,7 +597,7 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
unsigned int fmt; unsigned int fmt;
u32 val; u32 val;
switch (fb->pixel_format) { switch (fb->format->format) {
case DRM_FORMAT_XRGB4444: case DRM_FORMAT_XRGB4444:
case DRM_FORMAT_ARGB4444: case DRM_FORMAT_ARGB4444:
fmt = MXR_FORMAT_ARGB4444; fmt = MXR_FORMAT_ARGB4444;
@ -631,7 +631,7 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
/* converting dma address base and source offset */ /* converting dma address base and source offset */
dma_addr = exynos_drm_fb_dma_addr(fb, 0) dma_addr = exynos_drm_fb_dma_addr(fb, 0)
+ (state->src.x * fb->bits_per_pixel >> 3) + (state->src.x * fb->format->cpp[0])
+ (state->src.y * fb->pitches[0]); + (state->src.y * fb->pitches[0]);
src_x_offset = 0; src_x_offset = 0;
src_y_offset = 0; src_y_offset = 0;
@ -649,7 +649,7 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
/* setup geometry */ /* setup geometry */
mixer_reg_write(res, MXR_GRAPHIC_SPAN(win), mixer_reg_write(res, MXR_GRAPHIC_SPAN(win),
fb->pitches[0] / (fb->bits_per_pixel >> 3)); fb->pitches[0] / fb->format->cpp[0]);
/* setup display size */ /* setup display size */
if (ctx->mxr_ver == MXR_VER_128_0_0_184 && if (ctx->mxr_ver == MXR_VER_128_0_0_184 &&
@ -681,7 +681,7 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
mixer_cfg_scan(ctx, mode->vdisplay); mixer_cfg_scan(ctx, mode->vdisplay);
mixer_cfg_rgb_fmt(ctx, mode->vdisplay); mixer_cfg_rgb_fmt(ctx, mode->vdisplay);
mixer_cfg_layer(ctx, win, priority, true); mixer_cfg_layer(ctx, win, priority, true);
mixer_cfg_gfx_blend(ctx, win, is_alpha_format(fb->pixel_format)); mixer_cfg_gfx_blend(ctx, win, is_alpha_format(fb->format->format));
/* layer update mandatory for mixer 16.0.33.0 */ /* layer update mandatory for mixer 16.0.33.0 */
if (ctx->mxr_ver == MXR_VER_16_0_33_0 || if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||

View File

@ -434,7 +434,8 @@ static int fsl_dcu_drm_remove(struct platform_device *pdev)
{ {
struct fsl_dcu_drm_device *fsl_dev = platform_get_drvdata(pdev); struct fsl_dcu_drm_device *fsl_dev = platform_get_drvdata(pdev);
drm_put_dev(fsl_dev->drm); drm_dev_unregister(fsl_dev->drm);
drm_dev_unref(fsl_dev->drm);
clk_disable_unprepare(fsl_dev->clk); clk_disable_unprepare(fsl_dev->clk);
clk_unregister(fsl_dev->pix_clk); clk_unregister(fsl_dev->pix_clk);
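Like the etnaviv change above, this swaps drm_put_dev() for an explicit unregister-then-unref pair, keeping interface teardown separate from dropping the last reference. The resulting shape of a remove path, sketched with hypothetical names:

static int example_platform_remove(struct platform_device *pdev)
{
	struct drm_device *drm = platform_get_drvdata(pdev);

	drm_dev_unregister(drm);	/* stop userspace-facing interfaces */
	/* driver-specific teardown can run here, between the two steps */
	drm_dev_unref(drm);		/* then drop the final reference */

	return 0;
}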

View File

@ -12,6 +12,8 @@
#ifndef __FSL_DCU_DRM_DRV_H__ #ifndef __FSL_DCU_DRM_DRV_H__
#define __FSL_DCU_DRM_DRV_H__ #define __FSL_DCU_DRM_DRV_H__
#include <drm/drm_encoder.h>
#include "fsl_dcu_drm_crtc.h" #include "fsl_dcu_drm_crtc.h"
#include "fsl_dcu_drm_output.h" #include "fsl_dcu_drm_output.h"
#include "fsl_dcu_drm_plane.h" #include "fsl_dcu_drm_plane.h"

View File

@ -44,7 +44,7 @@ static int fsl_dcu_drm_plane_atomic_check(struct drm_plane *plane,
if (!state->fb || !state->crtc) if (!state->fb || !state->crtc)
return 0; return 0;
switch (fb->pixel_format) { switch (fb->format->format) {
case DRM_FORMAT_RGB565: case DRM_FORMAT_RGB565:
case DRM_FORMAT_RGB888: case DRM_FORMAT_RGB888:
case DRM_FORMAT_XRGB8888: case DRM_FORMAT_XRGB8888:
@ -96,7 +96,7 @@ static void fsl_dcu_drm_plane_atomic_update(struct drm_plane *plane,
gem = drm_fb_cma_get_gem_obj(fb, 0); gem = drm_fb_cma_get_gem_obj(fb, 0);
switch (fb->pixel_format) { switch (fb->format->format) {
case DRM_FORMAT_RGB565: case DRM_FORMAT_RGB565:
bpp = FSL_DCU_RGB565; bpp = FSL_DCU_RGB565;
break; break;

View File

@ -160,10 +160,7 @@ static int fsl_dcu_attach_endpoint(struct fsl_dcu_drm_device *fsl_dev,
if (!bridge) if (!bridge)
return -ENODEV; return -ENODEV;
fsl_dev->encoder.bridge = bridge; return drm_bridge_attach(&fsl_dev->encoder, bridge, NULL);
bridge->encoder = &fsl_dev->encoder;
return drm_bridge_attach(fsl_dev->drm, bridge);
} }
int fsl_dcu_create_outputs(struct fsl_dcu_drm_device *fsl_dev) int fsl_dcu_create_outputs(struct fsl_dcu_drm_device *fsl_dev)

View File

@ -254,7 +254,7 @@ static void psbfb_copyarea_accel(struct fb_info *info,
offset = psbfb->gtt->offset; offset = psbfb->gtt->offset;
stride = fb->pitches[0]; stride = fb->pitches[0];
switch (fb->depth) { switch (fb->format->depth) {
case 8: case 8:
src_format = PSB_2D_SRC_332RGB; src_format = PSB_2D_SRC_332RGB;
dst_format = PSB_2D_DST_332RGB; dst_format = PSB_2D_DST_332RGB;

View File

@ -77,7 +77,7 @@ static int psbfb_setcolreg(unsigned regno, unsigned red, unsigned green,
(transp << info->var.transp.offset); (transp << info->var.transp.offset);
if (regno < 16) { if (regno < 16) {
switch (fb->bits_per_pixel) { switch (fb->format->cpp[0] * 8) {
case 16: case 16:
((uint32_t *) info->pseudo_palette)[regno] = v; ((uint32_t *) info->pseudo_palette)[regno] = v;
break; break;
@ -244,7 +244,7 @@ static int psb_framebuffer_init(struct drm_device *dev,
if (mode_cmd->pitches[0] & 63) if (mode_cmd->pitches[0] & 63)
return -EINVAL; return -EINVAL;
drm_helper_mode_fill_fb_struct(&fb->base, mode_cmd); drm_helper_mode_fill_fb_struct(dev, &fb->base, mode_cmd);
fb->gtt = gt; fb->gtt = gt;
ret = drm_framebuffer_init(dev, &fb->base, &psb_fb_funcs); ret = drm_framebuffer_init(dev, &fb->base, &psb_fb_funcs);
if (ret) { if (ret) {
@ -407,7 +407,7 @@ static int psbfb_create(struct psb_fbdev *fbdev,
fbdev->psb_fb_helper.fb = fb; fbdev->psb_fb_helper.fb = fb;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
strcpy(info->fix.id, "psbdrmfb"); strcpy(info->fix.id, "psbdrmfb");
info->flags = FBINFO_DEFAULT; info->flags = FBINFO_DEFAULT;

View File

@ -59,7 +59,8 @@ int gma_pipe_set_base(struct drm_crtc *crtc, int x, int y,
struct drm_device *dev = crtc->dev; struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private; struct drm_psb_private *dev_priv = dev->dev_private;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc); struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
struct psb_framebuffer *psbfb = to_psb_fb(crtc->primary->fb); struct drm_framebuffer *fb = crtc->primary->fb;
struct psb_framebuffer *psbfb = to_psb_fb(fb);
int pipe = gma_crtc->pipe; int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe]; const struct psb_offset *map = &dev_priv->regmap[pipe];
unsigned long start, offset; unsigned long start, offset;
@ -70,7 +71,7 @@ int gma_pipe_set_base(struct drm_crtc *crtc, int x, int y,
return 0; return 0;
/* no fb bound */ /* no fb bound */
if (!crtc->primary->fb) { if (!fb) {
dev_err(dev->dev, "No FB bound\n"); dev_err(dev->dev, "No FB bound\n");
goto gma_pipe_cleaner; goto gma_pipe_cleaner;
} }
@ -81,19 +82,19 @@ int gma_pipe_set_base(struct drm_crtc *crtc, int x, int y,
if (ret < 0) if (ret < 0)
goto gma_pipe_set_base_exit; goto gma_pipe_set_base_exit;
start = psbfb->gtt->offset; start = psbfb->gtt->offset;
offset = y * crtc->primary->fb->pitches[0] + x * (crtc->primary->fb->bits_per_pixel / 8); offset = y * fb->pitches[0] + x * fb->format->cpp[0];
REG_WRITE(map->stride, crtc->primary->fb->pitches[0]); REG_WRITE(map->stride, fb->pitches[0]);
dspcntr = REG_READ(map->cntr); dspcntr = REG_READ(map->cntr);
dspcntr &= ~DISPPLANE_PIXFORMAT_MASK; dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (crtc->primary->fb->bits_per_pixel) { switch (fb->format->cpp[0] * 8) {
case 8: case 8:
dspcntr |= DISPPLANE_8BPP; dspcntr |= DISPPLANE_8BPP;
break; break;
case 16: case 16:
if (crtc->primary->fb->depth == 15) if (fb->format->depth == 15)
dspcntr |= DISPPLANE_15_16BPP; dspcntr |= DISPPLANE_15_16BPP;
else else
dspcntr |= DISPPLANE_16BPP; dspcntr |= DISPPLANE_16BPP;

View File

@ -148,7 +148,7 @@ static int check_fb(struct drm_framebuffer *fb)
if (!fb) if (!fb)
return 0; return 0;
switch (fb->bits_per_pixel) { switch (fb->format->cpp[0] * 8) {
case 8: case 8:
case 16: case 16:
case 24: case 24:
@ -165,8 +165,9 @@ static int mdfld__intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
{ {
struct drm_device *dev = crtc->dev; struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private; struct drm_psb_private *dev_priv = dev->dev_private;
struct drm_framebuffer *fb = crtc->primary->fb;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc); struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
struct psb_framebuffer *psbfb = to_psb_fb(crtc->primary->fb); struct psb_framebuffer *psbfb = to_psb_fb(fb);
int pipe = gma_crtc->pipe; int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe]; const struct psb_offset *map = &dev_priv->regmap[pipe];
unsigned long start, offset; unsigned long start, offset;
@ -178,12 +179,12 @@ static int mdfld__intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
dev_dbg(dev->dev, "pipe = 0x%x.\n", pipe); dev_dbg(dev->dev, "pipe = 0x%x.\n", pipe);
/* no fb bound */ /* no fb bound */
if (!crtc->primary->fb) { if (!fb) {
dev_dbg(dev->dev, "No FB bound\n"); dev_dbg(dev->dev, "No FB bound\n");
return 0; return 0;
} }
ret = check_fb(crtc->primary->fb); ret = check_fb(fb);
if (ret) if (ret)
return ret; return ret;
@ -196,18 +197,18 @@ static int mdfld__intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
return 0; return 0;
start = psbfb->gtt->offset; start = psbfb->gtt->offset;
offset = y * crtc->primary->fb->pitches[0] + x * (crtc->primary->fb->bits_per_pixel / 8); offset = y * fb->pitches[0] + x * fb->format->cpp[0];
REG_WRITE(map->stride, crtc->primary->fb->pitches[0]); REG_WRITE(map->stride, fb->pitches[0]);
dspcntr = REG_READ(map->cntr); dspcntr = REG_READ(map->cntr);
dspcntr &= ~DISPPLANE_PIXFORMAT_MASK; dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (crtc->primary->fb->bits_per_pixel) { switch (fb->format->cpp[0] * 8) {
case 8: case 8:
dspcntr |= DISPPLANE_8BPP; dspcntr |= DISPPLANE_8BPP;
break; break;
case 16: case 16:
if (crtc->primary->fb->depth == 15) if (fb->format->depth == 15)
dspcntr |= DISPPLANE_15_16BPP; dspcntr |= DISPPLANE_15_16BPP;
else else
dspcntr |= DISPPLANE_16BPP; dspcntr |= DISPPLANE_16BPP;

View File

@ -599,7 +599,8 @@ static int oaktrail_pipe_set_base(struct drm_crtc *crtc,
struct drm_device *dev = crtc->dev; struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private; struct drm_psb_private *dev_priv = dev->dev_private;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc); struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
struct psb_framebuffer *psbfb = to_psb_fb(crtc->primary->fb); struct drm_framebuffer *fb = crtc->primary->fb;
struct psb_framebuffer *psbfb = to_psb_fb(fb);
int pipe = gma_crtc->pipe; int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe]; const struct psb_offset *map = &dev_priv->regmap[pipe];
unsigned long start, offset; unsigned long start, offset;
@ -608,7 +609,7 @@ static int oaktrail_pipe_set_base(struct drm_crtc *crtc,
int ret = 0; int ret = 0;
/* no fb bound */ /* no fb bound */
if (!crtc->primary->fb) { if (!fb) {
dev_dbg(dev->dev, "No FB bound\n"); dev_dbg(dev->dev, "No FB bound\n");
return 0; return 0;
} }
@ -617,19 +618,19 @@ static int oaktrail_pipe_set_base(struct drm_crtc *crtc,
return 0; return 0;
start = psbfb->gtt->offset; start = psbfb->gtt->offset;
offset = y * crtc->primary->fb->pitches[0] + x * (crtc->primary->fb->bits_per_pixel / 8); offset = y * fb->pitches[0] + x * fb->format->cpp[0];
REG_WRITE(map->stride, crtc->primary->fb->pitches[0]); REG_WRITE(map->stride, fb->pitches[0]);
dspcntr = REG_READ(map->cntr); dspcntr = REG_READ(map->cntr);
dspcntr &= ~DISPPLANE_PIXFORMAT_MASK; dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (crtc->primary->fb->bits_per_pixel) { switch (fb->format->cpp[0] * 8) {
case 8: case 8:
dspcntr |= DISPPLANE_8BPP; dspcntr |= DISPPLANE_8BPP;
break; break;
case 16: case 16:
if (crtc->primary->fb->depth == 15) if (fb->format->depth == 15)
dspcntr |= DISPPLANE_15_16BPP; dspcntr |= DISPPLANE_15_16BPP;
else else
dspcntr |= DISPPLANE_16BPP; dspcntr |= DISPPLANE_16BPP;

View File

@ -23,6 +23,7 @@
#include <linux/i2c-algo-bit.h> #include <linux/i2c-algo-bit.h>
#include <drm/drm_crtc.h> #include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h> #include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder.h>
#include <linux/gpio.h> #include <linux/gpio.h>
#include "gma_display.h" #include "gma_display.h"

View File

@ -122,11 +122,11 @@ static void hibmc_plane_atomic_update(struct drm_plane *plane,
writel(gpu_addr, priv->mmio + HIBMC_CRT_FB_ADDRESS); writel(gpu_addr, priv->mmio + HIBMC_CRT_FB_ADDRESS);
reg = state->fb->width * (state->fb->bits_per_pixel / 8); reg = state->fb->width * (state->fb->format->cpp[0]);
/* now line_pad is 16 */ /* now line_pad is 16 */
reg = PADDING(16, reg); reg = PADDING(16, reg);
line_l = state->fb->width * state->fb->bits_per_pixel / 8; line_l = state->fb->width * state->fb->format->cpp[0];
line_l = PADDING(16, line_l); line_l = PADDING(16, line_l);
writel(HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_WIDTH, reg) | writel(HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_WIDTH, reg) |
HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_OFFS, line_l), HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_OFFS, line_l),
@ -136,7 +136,7 @@ static void hibmc_plane_atomic_update(struct drm_plane *plane,
reg = readl(priv->mmio + HIBMC_CRT_DISP_CTL); reg = readl(priv->mmio + HIBMC_CRT_DISP_CTL);
reg &= ~HIBMC_CRT_DISP_CTL_FORMAT_MASK; reg &= ~HIBMC_CRT_DISP_CTL_FORMAT_MASK;
reg |= HIBMC_FIELD(HIBMC_CRT_DISP_CTL_FORMAT, reg |= HIBMC_FIELD(HIBMC_CRT_DISP_CTL_FORMAT,
state->fb->bits_per_pixel / 16); state->fb->format->cpp[0] * 8 / 16);
writel(reg, priv->mmio + HIBMC_CRT_DISP_CTL); writel(reg, priv->mmio + HIBMC_CRT_DISP_CTL);
} }

View File

@ -135,7 +135,7 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
info->fbops = &hibmc_drm_fb_ops; info->fbops = &hibmc_drm_fb_ops;
drm_fb_helper_fill_fix(info, hi_fbdev->fb->fb.pitches[0], drm_fb_helper_fill_fix(info, hi_fbdev->fb->fb.pitches[0],
hi_fbdev->fb->fb.depth); hi_fbdev->fb->fb.format->depth);
drm_fb_helper_fill_var(info, &priv->fbdev->helper, sizes->fb_width, drm_fb_helper_fill_var(info, &priv->fbdev->helper, sizes->fb_width,
sizes->fb_height); sizes->fb_height);

View File

@ -512,7 +512,7 @@ hibmc_framebuffer_init(struct drm_device *dev,
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
} }
drm_helper_mode_fill_fb_struct(&hibmc_fb->fb, mode_cmd); drm_helper_mode_fill_fb_struct(dev, &hibmc_fb->fb, mode_cmd);
hibmc_fb->obj = obj; hibmc_fb->obj = obj;
ret = drm_framebuffer_init(dev, &hibmc_fb->fb, &hibmc_fb_funcs); ret = drm_framebuffer_init(dev, &hibmc_fb->fb, &hibmc_fb_funcs);
if (ret) { if (ret) {

View File

@ -709,10 +709,7 @@ static int dsi_bridge_init(struct drm_device *dev, struct dw_dsi *dsi)
int ret; int ret;
/* associate the bridge to dsi encoder */ /* associate the bridge to dsi encoder */
encoder->bridge = bridge; ret = drm_bridge_attach(encoder, bridge, NULL);
bridge->encoder = encoder;
ret = drm_bridge_attach(dev, bridge);
if (ret) { if (ret) {
DRM_ERROR("failed to attach external bridge\n"); DRM_ERROR("failed to attach external bridge\n");
return ret; return ret;

View File

@@ -617,7 +617,7 @@ static void ade_rdma_set(void __iomem *base, struct drm_framebuffer *fb,
 		 ch + 1, y, in_h, stride, (u32)obj->paddr);
 	DRM_DEBUG_DRIVER("addr=0x%x, fb:%dx%d, pixel_format=%d(%s)\n",
 			 addr, fb->width, fb->height, fmt,
-			 drm_get_format_name(fb->pixel_format, &format_name));
+			 drm_get_format_name(fb->format->format, &format_name));
 	/* get reg offset */
 	reg_ctrl = RD_CH_CTRL(ch);
@@ -773,7 +773,7 @@ static void ade_update_channel(struct ade_plane *aplane,
 {
 	struct ade_hw_ctx *ctx = aplane->ctx;
 	void __iomem *base = ctx->base;
-	u32 fmt = ade_get_format(fb->pixel_format);
+	u32 fmt = ade_get_format(fb->format->format);
 	u32 ch = aplane->ch;
 	u32 in_w;
 	u32 in_h;
@@ -835,7 +835,7 @@ static int ade_plane_atomic_check(struct drm_plane *plane,
 	if (!crtc || !fb)
 		return 0;
-	fmt = ade_get_format(fb->pixel_format);
+	fmt = ade_get_format(fb->format->format);
 	if (fmt == ADE_FORMAT_UNSUPPORT)
 		return -EINVAL;
@@ -973,9 +973,9 @@ static int ade_dts_parse(struct platform_device *pdev, struct ade_hw_ctx *ctx)
 	return 0;
 }
-static int ade_drm_init(struct drm_device *dev)
+static int ade_drm_init(struct platform_device *pdev)
 {
-	struct platform_device *pdev = dev->platformdev;
+	struct drm_device *dev = platform_get_drvdata(pdev);
 	struct ade_data *ade;
 	struct ade_hw_ctx *ctx;
 	struct ade_crtc *acrtc;
@@ -1034,13 +1034,8 @@ static int ade_drm_init(struct drm_device *dev)
 	return 0;
 }
-static void ade_drm_cleanup(struct drm_device *dev)
+static void ade_drm_cleanup(struct platform_device *pdev)
 {
-	struct platform_device *pdev = dev->platformdev;
-	struct ade_data *ade = platform_get_drvdata(pdev);
-	struct drm_crtc *crtc = &ade->acrtc.base;
-	drm_crtc_cleanup(crtc);
 }
 const struct kirin_dc_ops ade_dc_ops = {


@@ -42,7 +42,7 @@ static int kirin_drm_kms_cleanup(struct drm_device *dev)
 #endif
 	drm_kms_helper_poll_fini(dev);
 	drm_vblank_cleanup(dev);
-	dc_ops->cleanup(dev);
+	dc_ops->cleanup(to_platform_device(dev->dev));
 	drm_mode_config_cleanup(dev);
 	devm_kfree(dev->dev, priv);
 	dev->dev_private = NULL;
@@ -104,7 +104,7 @@ static int kirin_drm_kms_init(struct drm_device *dev)
 	kirin_drm_mode_config_init(dev);
 	/* display controller init */
-	ret = dc_ops->init(dev);
+	ret = dc_ops->init(to_platform_device(dev->dev));
 	if (ret)
 		goto err_mode_config_cleanup;
@@ -138,7 +138,7 @@ static int kirin_drm_kms_init(struct drm_device *dev)
 err_unbind_all:
 	component_unbind_all(dev->dev, dev);
 err_dc_cleanup:
-	dc_ops->cleanup(dev);
+	dc_ops->cleanup(to_platform_device(dev->dev));
 err_mode_config_cleanup:
 	drm_mode_config_cleanup(dev);
 	devm_kfree(dev->dev, priv);
@@ -209,8 +209,6 @@ static int kirin_drm_bind(struct device *dev)
 	if (IS_ERR(drm_dev))
 		return PTR_ERR(drm_dev);
-	drm_dev->platformdev = to_platform_device(dev);
 	ret = kirin_drm_kms_init(drm_dev);
 	if (ret)
 		goto err_drm_dev_unref;


@@ -15,8 +15,8 @@
 /* display controller init/cleanup ops */
 struct kirin_dc_ops {
-	int (*init)(struct drm_device *dev);
-	void (*cleanup)(struct drm_device *dev);
+	int (*init)(struct platform_device *pdev);
+	void (*cleanup)(struct platform_device *pdev);
 };
 struct kirin_drm_private {
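The kirin changes here and in the ade hunks above follow one pattern: stop going through drm_device::platformdev (which this series is working to retire) and hand the display-controller ops the platform device directly, recovering it from the struct device backing the DRM device. A sketch of the recovery step, assuming a platform-bound driver:

#include <linux/platform_device.h>
#include <drm/drmP.h>

static struct platform_device *sketch_to_pdev(struct drm_device *drm)
{
	/* drm->dev is the struct device the driver was bound against */
	return to_platform_device(drm->dev);
}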


@@ -1869,8 +1869,8 @@ static int i915_gem_framebuffer_info(struct seq_file *m, void *data)
 		seq_printf(m, "fbcon size: %d x %d, depth %d, %d bpp, modifier 0x%llx, refcount %d, obj ",
 			   fbdev_fb->base.width,
 			   fbdev_fb->base.height,
-			   fbdev_fb->base.depth,
-			   fbdev_fb->base.bits_per_pixel,
+			   fbdev_fb->base.format->depth,
+			   fbdev_fb->base.format->cpp[0] * 8,
 			   fbdev_fb->base.modifier,
 			   drm_framebuffer_read_refcount(&fbdev_fb->base));
 		describe_obj(m, fbdev_fb->obj);
@@ -1887,8 +1887,8 @@ static int i915_gem_framebuffer_info(struct seq_file *m, void *data)
 		seq_printf(m, "user size: %d x %d, depth %d, %d bpp, modifier 0x%llx, refcount %d, obj ",
 			   fb->base.width,
 			   fb->base.height,
-			   fb->base.depth,
-			   fb->base.bits_per_pixel,
+			   fb->base.format->depth,
+			   fb->base.format->cpp[0] * 8,
 			   fb->base.modifier,
 			   drm_framebuffer_read_refcount(&fb->base));
 		describe_obj(m, fb->obj);
@@ -3031,7 +3031,8 @@ static void intel_plane_info(struct seq_file *m, struct intel_crtc *intel_crtc)
 		state = plane->state;
 		if (state->fb) {
-			drm_get_format_name(state->fb->pixel_format, &format_name);
+			drm_get_format_name(state->fb->format->format,
+					    &format_name);
 		} else {
 			sprintf(format_name.str, "N/A");
 		}
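Besides the fb->format->format lookup, these hunks show the drm_get_format_name() calling convention from this series: the caller provides a struct drm_format_name_buf on the stack and the function fills buf->str, so nothing is allocated or freed. A small sketch, with the debug message as the only assumption:

#include <drm/drmP.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_framebuffer.h>

static void sketch_log_fb_format(const struct drm_framebuffer *fb)
{
	struct drm_format_name_buf name;

	/* fills name.str with a human-readable fourcc name */
	drm_get_format_name(fb->format->format, &name);
	DRM_DEBUG_KMS("fb format: %s\n", name.str);
}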


@@ -1094,7 +1094,7 @@ struct intel_fbc {
 		struct {
 			u64 ilk_ggtt_offset;
-			uint32_t pixel_format;
+			const struct drm_format_info *format;
 			unsigned int stride;
 			int fence_reg;
 			unsigned int tiling_mode;
@@ -1110,7 +1110,7 @@ struct intel_fbc {
 		struct {
 			u64 ggtt_offset;
-			uint32_t pixel_format;
+			const struct drm_format_info *format;
 			unsigned int stride;
 			int fence_reg;
 		} fb;


@@ -51,7 +51,10 @@ static bool ggtt_is_idle(struct drm_i915_private *dev_priv)
 }
 static bool
-mark_free(struct i915_vma *vma, unsigned int flags, struct list_head *unwind)
+mark_free(struct drm_mm_scan *scan,
+	  struct i915_vma *vma,
+	  unsigned int flags,
+	  struct list_head *unwind)
 {
 	if (i915_vma_is_pinned(vma))
 		return false;
@@ -63,7 +66,7 @@ mark_free(struct i915_vma *vma, unsigned int flags, struct list_head *unwind)
 		return false;
 	list_add(&vma->exec_list, unwind);
-	return drm_mm_scan_add_block(&vma->node);
+	return drm_mm_scan_add_block(scan, &vma->node);
 }
 /**
@@ -97,6 +100,7 @@ i915_gem_evict_something(struct i915_address_space *vm,
 			 unsigned flags)
 {
 	struct drm_i915_private *dev_priv = vm->i915;
+	struct drm_mm_scan scan;
 	struct list_head eviction_list;
 	struct list_head *phases[] = {
 		&vm->inactive_list,
@@ -104,6 +108,7 @@ i915_gem_evict_something(struct i915_address_space *vm,
 		NULL,
 	}, **phase;
 	struct i915_vma *vma, *next;
+	struct drm_mm_node *node;
 	int ret;
 	lockdep_assert_held(&vm->i915->drm.struct_mutex);
@@ -122,12 +127,10 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	 * On each list, the oldest objects lie at the HEAD with the freshest
 	 * object on the TAIL.
 	 */
-	if (start != 0 || end != vm->total) {
-		drm_mm_init_scan_with_range(&vm->mm, min_size,
-					    alignment, cache_level,
-					    start, end);
-	} else
-		drm_mm_init_scan(&vm->mm, min_size, alignment, cache_level);
+	drm_mm_scan_init_with_range(&scan, &vm->mm,
+				    min_size, alignment, cache_level,
+				    start, end,
+				    flags & PIN_HIGH ? DRM_MM_CREATE_TOP : 0);
 	/* Retire before we search the active list. Although we have
 	 * reasonable accuracy in our retirement lists, we may have
@@ -144,13 +147,13 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	phase = phases;
 	do {
 		list_for_each_entry(vma, *phase, vm_link)
-			if (mark_free(vma, flags, &eviction_list))
+			if (mark_free(&scan, vma, flags, &eviction_list))
 				goto found;
 	} while (*++phase);
 	/* Nothing found, clean up and bail out! */
 	list_for_each_entry_safe(vma, next, &eviction_list, exec_list) {
-		ret = drm_mm_scan_remove_block(&vma->node);
+		ret = drm_mm_scan_remove_block(&scan, &vma->node);
 		BUG_ON(ret);
 		INIT_LIST_HEAD(&vma->exec_list);
@@ -199,7 +202,7 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	 * of any of our objects, thus corrupting the list).
 	 */
 	list_for_each_entry_safe(vma, next, &eviction_list, exec_list) {
-		if (drm_mm_scan_remove_block(&vma->node))
+		if (drm_mm_scan_remove_block(&scan, &vma->node))
 			__i915_vma_pin(vma);
 		else
 			list_del_init(&vma->exec_list);
@@ -216,6 +219,12 @@ i915_gem_evict_something(struct i915_address_space *vm,
 		if (ret == 0)
 			ret = i915_vma_unbind(vma);
 	}
+	while (ret == 0 && (node = drm_mm_scan_color_evict(&scan))) {
+		vma = container_of(node, struct i915_vma, node);
+		ret = i915_vma_unbind(vma);
+	}
 	return ret;
 }
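The eviction rework is the drm_mm side of this merge: scan state moves out of struct drm_mm into a caller-owned struct drm_mm_scan, so the scan protocol becomes explicit. Add candidate blocks until drm_mm_scan_add_block() reports a hole, remove every added block again in reverse order (a true return from drm_mm_scan_remove_block() marks a block that must be evicted), then drain drm_mm_scan_color_evict() for nodes held only by color constraints. A compressed sketch of that protocol, with locking and list management left to the caller:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <drm/drm_mm.h>

/* sketch only: nodes[] is an ordered array of eviction candidates */
static int sketch_scan_for_hole(struct drm_mm *mm,
				struct drm_mm_node **nodes, int count,
				u64 size, u64 alignment)
{
	struct drm_mm_scan scan;
	bool found = false;
	int i, added = 0, evict = 0;

	drm_mm_scan_init_with_range(&scan, mm, size, alignment,
				    0 /* color */, 0, U64_MAX,
				    0 /* flags */);

	/* phase 1: offer candidates until a large-enough hole appears */
	for (i = 0; i < count && !found; i++) {
		found = drm_mm_scan_add_block(&scan, nodes[i]);
		added = i + 1;
	}

	/*
	 * phase 2: every added block must be removed again, newest
	 * first; a true return marks a block to actually evict
	 */
	for (i = added - 1; i >= 0; i--)
		if (drm_mm_scan_remove_block(&scan, nodes[i]))
			evict++;

	return found ? evict : -ENOSPC;
}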


@@ -2703,7 +2703,7 @@ void i915_gem_gtt_finish_pages(struct drm_i915_gem_object *obj,
 	dma_unmap_sg(kdev, pages->sgl, pages->nents, PCI_DMA_BIDIRECTIONAL);
 }
-static void i915_gtt_color_adjust(struct drm_mm_node *node,
+static void i915_gtt_color_adjust(const struct drm_mm_node *node,
 				  unsigned long color,
 				  u64 *start,
 				  u64 *end)
@@ -2711,10 +2711,8 @@ static void i915_gtt_color_adjust(struct drm_mm_node *node,
 	if (node->color != color)
 		*start += 4096;
-	node = list_first_entry_or_null(&node->node_list,
-					struct drm_mm_node,
-					node_list);
-	if (node && node->allocated && node->color != color)
+	node = list_next_entry(node, node_list);
+	if (node->allocated && node->color != color)
 		*end -= 4096;
 }
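Constifying the node argument works because a color_adjust callback only reads the neighbour list while drm_mm searches and scans for holes. For reference, a sketch of how such a hook is installed; the 4 KiB guard-page policy mirrors the i915 code above, while the setup function itself is illustrative:

#include <linux/list.h>
#include <drm/drm_mm.h>

static void sketch_color_adjust(const struct drm_mm_node *node,
				unsigned long color,
				u64 *start, u64 *end)
{
	/* keep a guard page between nodes of differing color */
	if (node->color != color)
		*start += 4096;

	node = list_next_entry(node, node_list);
	if (node->allocated && node->color != color)
		*end -= 4096;
}

static void sketch_setup(struct drm_mm *mm)
{
	/* drm_mm consults this hook when reporting hole boundaries */
	mm->color_adjust = sketch_color_adjust;
}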

Some files were not shown because too many files have changed in this diff.