linux_dsm_epyc7002/drivers/gpu/drm/xen/xen_drm_front.c
Oleksandr Andrushchenko c575b7eeb8 drm/xen-front: Add support for Xen PV display frontend
Add support for the Xen para-virtualized frontend display driver.
The accompanying backend [1] is implemented as a user-space application
with its helper library [2], capable of running as a Weston client
or DRM master.
Configuration of both backend and frontend is done via
Xen guest domain configuration options [3].
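
For example, an illustrative xl guest configuration fragment could look as
follows (key names follow [3]; the exact syntax depends on the Xen/xl
version in use, so treat this as a sketch rather than a verbatim quote):

  # One para-virtualized display device served by the backend in domain 0,
  # with backend-allocated buffers and two connectors.
  vdispl = [ 'backend=0, be-alloc=1, connectors=id0:1920x1080;id1:800x600' ]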

Driver limitations:
 1. Only the primary plane, without additional properties, is supported.
 2. Only one video mode is supported, whose resolution is configured
    via XenStore.
 3. All CRTCs operate at a fixed frequency of 60 Hz.

1. Implement Xen bus state machine for the frontend driver according to
the state diagram and recovery flow from display para-virtualized
protocol: xen/interface/io/displif.h.

2. Read configuration values from XenStore according to the
xen/interface/io/displif.h protocol (a minimal read sketch follows the list):
  - read connector(s) configuration
  - read buffer allocation mode (backend/frontend)
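
The configuration code itself lives in xen_drm_front_cfg.c; the sketch below
only illustrates how such values can be read from XenStore. The helper name
and the flat per-connector path are assumptions made for brevity, while the
key names come from xen/interface/io/displif.h:

#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <xen/xenbus.h>
#include <xen/interface/io/displif.h>

/* Sketch only: read the buffer allocation mode and one connector's
 * resolution; per-connector keys live under a numbered sub-path of the
 * device node. Error unwinding is trimmed.
 */
static int cfg_read_sketch(struct xenbus_device *xb_dev, int conn_idx,
                           u32 *width, u32 *height, bool *be_alloc)
{
        char path[64], *res;

        /* 0 - buffers are allocated by the frontend, 1 - by the backend */
        *be_alloc = xenbus_read_unsigned(xb_dev->nodename,
                                         XENDISPL_FIELD_BE_ALLOC, 0);

        snprintf(path, sizeof(path), "%s/%d", xb_dev->nodename, conn_idx);
        res = xenbus_read(XBT_NIL, path, XENDISPL_FIELD_RESOLUTION, NULL);
        if (IS_ERR(res))
                return PTR_ERR(res);

        /* the resolution is published as "<width>x<height>" */
        if (sscanf(res, "%ux%u", width, height) != 2) {
                kfree(res);
                return -EINVAL;
        }

        kfree(res);
        return 0;
}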

3. Handle Xen event channels (a creation sketch follows the list):
  - create for all configured connectors and publish
    corresponding ring references and event channels in Xen store,
    so backend can connect
  - implement event channels interrupt handlers
  - create and destroy event channels with respect to Xen bus state
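
The real event channel code lives in xen_drm_front_evtchnl.c; the sketch
below shows the general shape of creating one request channel: allocate and
grant a shared ring, allocate and bind an event channel, then publish both
in XenStore so the backend can connect. The helper names are assumptions,
error unwinding and the per-connector sub-path used by the real driver are
omitted, and the XenStore key names are quoted as defined in displif.h:

#include <linux/interrupt.h>
#include <xen/events.h>
#include <xen/page.h>
#include <xen/xenbus.h>
#include <xen/interface/io/displif.h>

static irqreturn_t evtchnl_irq_sketch(int irq, void *dev_id)
{
        /* process ring responses / backend events here */
        return IRQ_HANDLED;
}

static int evtchnl_create_sketch(struct xenbus_device *xb_dev, void *priv)
{
        struct xendispl_sring *sring;
        struct xendispl_front_ring ring;
        grant_ref_t gref;
        int port, irq, ret;

        sring = (struct xendispl_sring *)get_zeroed_page(GFP_KERNEL);
        if (!sring)
                return -ENOMEM;

        SHARED_RING_INIT(sring);
        FRONT_RING_INIT(&ring, sring, XEN_PAGE_SIZE);

        ret = xenbus_grant_ring(xb_dev, sring, 1, &gref);
        if (ret < 0)
                return ret;

        ret = xenbus_alloc_evtchn(xb_dev, &port);
        if (ret < 0)
                return ret;

        irq = bind_evtchn_to_irqhandler(port, evtchnl_irq_sketch, 0,
                                        xb_dev->devicetype, priv);
        if (irq < 0)
                return irq;

        /* publish references so the backend can connect */
        ret = xenbus_printf(XBT_NIL, xb_dev->nodename,
                            "req-ring-ref", "%u", gref);
        if (!ret)
                ret = xenbus_printf(XBT_NIL, xb_dev->nodename,
                                    "req-event-channel", "%u", port);
        return ret;
}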

4. Implement shared buffer handling according to the
para-virtualized display device protocol at xen/interface/io/displif.h
(a granting sketch follows the list):
  - handle page directories according to displif protocol:
    - allocate and share page directories
    - grant references to the required set of pages for the
      page directory
  - allocate Xen ballooned pages via the Xen balloon driver
    with alloc_xenballooned_pages/free_xenballooned_pages
  - grant references to the required set of pages for the
    shared buffer itself
  - implement pages map/unmap for the buffers allocated by the
    backend (gnttab_map_refs/gnttab_unmap_refs)
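
The real shared buffer code lives in xen_drm_front_shbuf.c; the sketch
below, with an assumed helper name, only shows the central step for
frontend-allocated buffers: granting the backend access to every page of
the display buffer. Page directory handling and error unwinding are
omitted:

#include <xen/grant_table.h>
#include <xen/page.h>
#include <xen/xenbus.h>

static int shbuf_grant_pages_sketch(struct xenbus_device *xb_dev,
                                    struct page **pages, int nr_pages,
                                    grant_ref_t *grefs)
{
        int i, gref;

        for (i = 0; i < nr_pages; i++) {
                /* grant the backend domain read/write access to this page */
                gref = gnttab_grant_foreign_access(xb_dev->otherend_id,
                                                   xen_page_to_gfn(pages[i]),
                                                   0 /* read-write */);
                if (gref < 0)
                        return gref;
                grefs[i] = gref;
        }

        /*
         * For backend-allocated buffers the frontend provides ballooned
         * pages instead, e.g. alloc_xenballooned_pages(nr_pages, pages),
         * and maps the backend's grants into them with gnttab_map_refs().
         */
        return 0;
}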

5. Implement kernel modesetting/connector handling using the
DRM simple KMS helper pipeline (a pipeline init sketch follows the list):

- implement the KMS part of the driver with the help of the DRM
  simple pipeline helper, which is possible because the
  para-virtualized driver only supports a single
  (primary) plane:
  - initialize connectors according to XenStore configuration
  - handle frame done events from the backend
  - create and destroy frame buffers and propagate those
    to the backend
  - propagate set/reset mode configuration to the backend on display
    enable/disable callbacks
  - send page flip request to the backend and implement logic for
    reporting backend IO errors on prepare fb callback

- implement virtual connector handling:
  - support only pixel formats suitable for single plane modes
  - make sure the connector is always connected
  - support a single video mode as per para-virtualized driver
    configuration
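
The real KMS code lives in xen_drm_front_kms.c; the sketch below only
illustrates why the simple KMS helper fits: with a single primary plane per
connector, the whole plane/CRTC/encoder pipeline can be wired up with one
call. The helper name, the single XRGB8888 format and the caller-provided
"funcs"/"connector" are assumptions:

#include <drm/drm_fourcc.h>
#include <drm/drm_simple_kms_helper.h>

static int pipeline_init_sketch(struct drm_device *dev,
                                struct drm_simple_display_pipe *pipe,
                                const struct drm_simple_display_pipe_funcs *funcs,
                                struct drm_connector *connector)
{
        /* only pixel formats suitable for a single (primary) plane */
        static const u32 formats[] = { DRM_FORMAT_XRGB8888 };

        return drm_simple_display_pipe_init(dev, pipe, funcs,
                                            formats, ARRAY_SIZE(formats),
                                            NULL /* modifiers */, connector);
}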

6. Implement GEM handling depending on the driver mode of operation:
depending on the requirements of the para-virtualized environment,
namely requirements dictated by the accompanying DRM/(v)GPU drivers
running in both host and guest environments, a number of operating
modes of the para-virtualized display driver are supported:
 - display buffers can be allocated by either
   frontend driver or backend
 - display buffers can be allocated to be contiguous
   in memory or not

Note! The frontend driver itself has no dependency on contiguous memory
for its operation.

6.1. Buffers allocated by the frontend driver.

The below modes of operation are configured at compile-time via the
frontend driver's kernel configuration (an illustrative Kconfig fragment
is sketched below, after 6.1.2).

6.1.1. Front driver configured to use GEM CMA helpers
     This use-case is useful with an accompanying DRM/vGPU driver in the
     guest domain which was designed to work only with contiguous
     buffers, e.g. a DRM driver based on GEM CMA helpers: such drivers
     can only import contiguous PRIME buffers, thus requiring the
     frontend driver to provide such. To implement this mode of operation
     the para-virtualized frontend driver can be configured to use
     GEM CMA helpers.

6.1.2. Front driver doesn't use GEM CMA
     If the accompanying drivers can cope with non-contiguous memory then,
     to lower the pressure on the kernel's CMA subsystem, the driver can
     allocate buffers from system memory.

Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
may require IOMMU support on the platform, so the accompanying DRM/vGPU
hardware can still reach the display buffer memory while importing PRIME
buffers from the frontend driver.
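
The CONFIG_DRM_XEN_FRONTEND_CMA symbol tested in the code below suggests a
Kconfig fragment along the following lines; the option text and dependencies
here are an illustration, not a quote of the actual Kconfig:

config DRM_XEN_FRONTEND_CMA
        bool "Use DRM CMA helpers to allocate display buffers"
        depends on DRM_XEN_FRONTEND
        select DRM_KMS_CMA_HELPER
        select DRM_GEM_CMA_HELPER
        help
          Allocate display buffers as physically contiguous memory via the
          DRM CMA helpers (mode 6.1.1). If not set, buffers are allocated
          from system memory (mode 6.1.2).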

6.2. Buffers allocated by the backend

This mode of operation is run-time configured via guest domain
configuration through XenStore entries.

For systems which do not provide IOMMU support, but have specific
requirements for display buffers, it is possible to allocate such buffers
at the backend side and share those with the frontend.
For example, if the host domain is 1:1 mapped and has DRM/GPU hardware
expecting physically contiguous memory, this allows implementing
zero-copy use-cases.

Note: while using this scenario the following should be considered:
  a) if the guest domain dies then pages/grants received from the backend
     cannot be claimed back;
  b) a misbehaving guest may send too many requests to the
     backend, exhausting its grant references and memory
     (consider this from a security POV).

Note! Configuration options 6.1.1 (contiguous display buffers) and 6.2
(backend allocated buffers) are not supported at the same time.

7. Handle communication with the backend:
 - send requests and wait for the responses according
   to the displif protocol
 - serialize access to the communication channel
 - time-out used for backend communication is set to 3000 ms
 - manage display buffers shared with the backend

[1] https://github.com/xen-troops/displ_be
[2] https://github.com/xen-troops/libxenbe
[3] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/man/xl.cfg.pod.5.in;h=a699367779e2ae1212ff8f638eff0206ec1a1cc9;hb=refs/heads/master#l1257

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180403112317.28751-2-andr2000@gmail.com
2018-04-03 14:41:48 +03:00


// SPDX-License-Identifier: GPL-2.0 OR MIT
/*
* Xen para-virtual DRM device
*
* Copyright (C) 2016-2018 EPAM Systems Inc.
*
* Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
*/
#include <drm/drmP.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_cma_helper.h>
#include <linux/of_device.h>
#include <xen/platform_pci.h>
#include <xen/xen.h>
#include <xen/xenbus.h>
#include <xen/interface/io/displif.h>
#include "xen_drm_front.h"
#include "xen_drm_front_cfg.h"
#include "xen_drm_front_evtchnl.h"
#include "xen_drm_front_gem.h"
#include "xen_drm_front_kms.h"
#include "xen_drm_front_shbuf.h"
struct xen_drm_front_dbuf {
struct list_head list;
u64 dbuf_cookie;
u64 fb_cookie;
struct xen_drm_front_shbuf *shbuf;
};
static int dbuf_add_to_list(struct xen_drm_front_info *front_info,
struct xen_drm_front_shbuf *shbuf, u64 dbuf_cookie)
{
struct xen_drm_front_dbuf *dbuf;
dbuf = kzalloc(sizeof(*dbuf), GFP_KERNEL);
if (!dbuf)
return -ENOMEM;
dbuf->dbuf_cookie = dbuf_cookie;
dbuf->shbuf = shbuf;
list_add(&dbuf->list, &front_info->dbuf_list);
return 0;
}
static struct xen_drm_front_dbuf *dbuf_get(struct list_head *dbuf_list,
u64 dbuf_cookie)
{
struct xen_drm_front_dbuf *buf, *q;
list_for_each_entry_safe(buf, q, dbuf_list, list)
if (buf->dbuf_cookie == dbuf_cookie)
return buf;
return NULL;
}
static void dbuf_flush_fb(struct list_head *dbuf_list, u64 fb_cookie)
{
struct xen_drm_front_dbuf *buf, *q;
list_for_each_entry_safe(buf, q, dbuf_list, list)
if (buf->fb_cookie == fb_cookie)
xen_drm_front_shbuf_flush(buf->shbuf);
}
static void dbuf_free(struct list_head *dbuf_list, u64 dbuf_cookie)
{
struct xen_drm_front_dbuf *buf, *q;
list_for_each_entry_safe(buf, q, dbuf_list, list)
if (buf->dbuf_cookie == dbuf_cookie) {
list_del(&buf->list);
xen_drm_front_shbuf_unmap(buf->shbuf);
xen_drm_front_shbuf_free(buf->shbuf);
kfree(buf);
break;
}
}
static void dbuf_free_all(struct list_head *dbuf_list)
{
struct xen_drm_front_dbuf *buf, *q;
list_for_each_entry_safe(buf, q, dbuf_list, list) {
list_del(&buf->list);
xen_drm_front_shbuf_unmap(buf->shbuf);
xen_drm_front_shbuf_free(buf->shbuf);
kfree(buf);
}
}
static struct xendispl_req *
be_prepare_req(struct xen_drm_front_evtchnl *evtchnl, u8 operation)
{
struct xendispl_req *req;
req = RING_GET_REQUEST(&evtchnl->u.req.ring,
evtchnl->u.req.ring.req_prod_pvt);
req->operation = operation;
req->id = evtchnl->evt_next_id++;
evtchnl->evt_id = req->id;
return req;
}
static int be_stream_do_io(struct xen_drm_front_evtchnl *evtchnl,
struct xendispl_req *req)
{
reinit_completion(&evtchnl->u.req.completion);
if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
return -EIO;
xen_drm_front_evtchnl_flush(evtchnl);
return 0;
}
static int be_stream_wait_io(struct xen_drm_front_evtchnl *evtchnl)
{
if (wait_for_completion_timeout(&evtchnl->u.req.completion,
msecs_to_jiffies(XEN_DRM_FRONT_WAIT_BACK_MS)) <= 0)
return -ETIMEDOUT;
return evtchnl->u.req.resp_status;
}
int xen_drm_front_mode_set(struct xen_drm_front_drm_pipeline *pipeline,
u32 x, u32 y, u32 width, u32 height,
u32 bpp, u64 fb_cookie)
{
struct xen_drm_front_evtchnl *evtchnl;
struct xen_drm_front_info *front_info;
struct xendispl_req *req;
unsigned long flags;
int ret;
front_info = pipeline->drm_info->front_info;
evtchnl = &front_info->evt_pairs[pipeline->index].req;
if (unlikely(!evtchnl))
return -EIO;
mutex_lock(&evtchnl->u.req.req_io_lock);
spin_lock_irqsave(&front_info->io_lock, flags);
req = be_prepare_req(evtchnl, XENDISPL_OP_SET_CONFIG);
req->op.set_config.x = x;
req->op.set_config.y = y;
req->op.set_config.width = width;
req->op.set_config.height = height;
req->op.set_config.bpp = bpp;
req->op.set_config.fb_cookie = fb_cookie;
ret = be_stream_do_io(evtchnl, req);
spin_unlock_irqrestore(&front_info->io_lock, flags);
if (ret == 0)
ret = be_stream_wait_io(evtchnl);
mutex_unlock(&evtchnl->u.req.req_io_lock);
return ret;
}
static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
u64 dbuf_cookie, u32 width, u32 height,
u32 bpp, u64 size, struct page **pages,
struct sg_table *sgt)
{
struct xen_drm_front_evtchnl *evtchnl;
struct xen_drm_front_shbuf *shbuf;
struct xendispl_req *req;
struct xen_drm_front_shbuf_cfg buf_cfg;
unsigned long flags;
int ret;
evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
if (unlikely(!evtchnl))
return -EIO;
memset(&buf_cfg, 0, sizeof(buf_cfg));
buf_cfg.xb_dev = front_info->xb_dev;
buf_cfg.pages = pages;
buf_cfg.size = size;
buf_cfg.sgt = sgt;
buf_cfg.be_alloc = front_info->cfg.be_alloc;
shbuf = xen_drm_front_shbuf_alloc(&buf_cfg);
if (!shbuf)
return -ENOMEM;
ret = dbuf_add_to_list(front_info, shbuf, dbuf_cookie);
if (ret < 0) {
xen_drm_front_shbuf_free(shbuf);
return ret;
}
mutex_lock(&evtchnl->u.req.req_io_lock);
spin_lock_irqsave(&front_info->io_lock, flags);
req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_CREATE);
req->op.dbuf_create.gref_directory =
xen_drm_front_shbuf_get_dir_start(shbuf);
req->op.dbuf_create.buffer_sz = size;
req->op.dbuf_create.dbuf_cookie = dbuf_cookie;
req->op.dbuf_create.width = width;
req->op.dbuf_create.height = height;
req->op.dbuf_create.bpp = bpp;
if (buf_cfg.be_alloc)
req->op.dbuf_create.flags |= XENDISPL_DBUF_FLG_REQ_ALLOC;
ret = be_stream_do_io(evtchnl, req);
spin_unlock_irqrestore(&front_info->io_lock, flags);
if (ret < 0)
goto fail;
ret = be_stream_wait_io(evtchnl);
if (ret < 0)
goto fail;
ret = xen_drm_front_shbuf_map(shbuf);
if (ret < 0)
goto fail;
mutex_unlock(&evtchnl->u.req.req_io_lock);
return 0;
fail:
mutex_unlock(&evtchnl->u.req.req_io_lock);
dbuf_free(&front_info->dbuf_list, dbuf_cookie);
return ret;
}
int xen_drm_front_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
u64 dbuf_cookie, u32 width, u32 height,
u32 bpp, u64 size, struct sg_table *sgt)
{
return be_dbuf_create_int(front_info, dbuf_cookie, width, height,
bpp, size, NULL, sgt);
}
int xen_drm_front_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
u64 dbuf_cookie, u32 width, u32 height,
u32 bpp, u64 size, struct page **pages)
{
return be_dbuf_create_int(front_info, dbuf_cookie, width, height,
bpp, size, pages, NULL);
}
static int xen_drm_front_dbuf_destroy(struct xen_drm_front_info *front_info,
u64 dbuf_cookie)
{
struct xen_drm_front_evtchnl *evtchnl;
struct xendispl_req *req;
unsigned long flags;
bool be_alloc;
int ret;
evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
if (unlikely(!evtchnl))
return -EIO;
be_alloc = front_info->cfg.be_alloc;
/*
* For the backend allocated buffer release references now, so backend
* can free the buffer.
*/
if (be_alloc)
dbuf_free(&front_info->dbuf_list, dbuf_cookie);
mutex_lock(&evtchnl->u.req.req_io_lock);
spin_lock_irqsave(&front_info->io_lock, flags);
req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_DESTROY);
req->op.dbuf_destroy.dbuf_cookie = dbuf_cookie;
ret = be_stream_do_io(evtchnl, req);
spin_unlock_irqrestore(&front_info->io_lock, flags);
if (ret == 0)
ret = be_stream_wait_io(evtchnl);
/*
* Do this regardless of communication status with the backend:
* if we cannot remove remote resources remove what we can locally.
*/
if (!be_alloc)
dbuf_free(&front_info->dbuf_list, dbuf_cookie);
mutex_unlock(&evtchnl->u.req.req_io_lock);
return ret;
}
int xen_drm_front_fb_attach(struct xen_drm_front_info *front_info,
u64 dbuf_cookie, u64 fb_cookie, u32 width,
u32 height, u32 pixel_format)
{
struct xen_drm_front_evtchnl *evtchnl;
struct xen_drm_front_dbuf *buf;
struct xendispl_req *req;
unsigned long flags;
int ret;
evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
if (unlikely(!evtchnl))
return -EIO;
buf = dbuf_get(&front_info->dbuf_list, dbuf_cookie);
if (!buf)
return -EINVAL;
buf->fb_cookie = fb_cookie;
mutex_lock(&evtchnl->u.req.req_io_lock);
spin_lock_irqsave(&front_info->io_lock, flags);
req = be_prepare_req(evtchnl, XENDISPL_OP_FB_ATTACH);
req->op.fb_attach.dbuf_cookie = dbuf_cookie;
req->op.fb_attach.fb_cookie = fb_cookie;
req->op.fb_attach.width = width;
req->op.fb_attach.height = height;
req->op.fb_attach.pixel_format = pixel_format;
ret = be_stream_do_io(evtchnl, req);
spin_unlock_irqrestore(&front_info->io_lock, flags);
if (ret == 0)
ret = be_stream_wait_io(evtchnl);
mutex_unlock(&evtchnl->u.req.req_io_lock);
return ret;
}
int xen_drm_front_fb_detach(struct xen_drm_front_info *front_info,
u64 fb_cookie)
{
struct xen_drm_front_evtchnl *evtchnl;
struct xendispl_req *req;
unsigned long flags;
int ret;
evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
if (unlikely(!evtchnl))
return -EIO;
mutex_lock(&evtchnl->u.req.req_io_lock);
spin_lock_irqsave(&front_info->io_lock, flags);
req = be_prepare_req(evtchnl, XENDISPL_OP_FB_DETACH);
req->op.fb_detach.fb_cookie = fb_cookie;
ret = be_stream_do_io(evtchnl, req);
spin_unlock_irqrestore(&front_info->io_lock, flags);
if (ret == 0)
ret = be_stream_wait_io(evtchnl);
mutex_unlock(&evtchnl->u.req.req_io_lock);
return ret;
}
int xen_drm_front_page_flip(struct xen_drm_front_info *front_info,
int conn_idx, u64 fb_cookie)
{
struct xen_drm_front_evtchnl *evtchnl;
struct xendispl_req *req;
unsigned long flags;
int ret;
if (unlikely(conn_idx >= front_info->num_evt_pairs))
return -EINVAL;
dbuf_flush_fb(&front_info->dbuf_list, fb_cookie);
evtchnl = &front_info->evt_pairs[conn_idx].req;
mutex_lock(&evtchnl->u.req.req_io_lock);
spin_lock_irqsave(&front_info->io_lock, flags);
req = be_prepare_req(evtchnl, XENDISPL_OP_PG_FLIP);
req->op.pg_flip.fb_cookie = fb_cookie;
ret = be_stream_do_io(evtchnl, req);
spin_unlock_irqrestore(&front_info->io_lock, flags);
if (ret == 0)
ret = be_stream_wait_io(evtchnl);
mutex_unlock(&evtchnl->u.req.req_io_lock);
return ret;
}
void xen_drm_front_on_frame_done(struct xen_drm_front_info *front_info,
int conn_idx, u64 fb_cookie)
{
struct xen_drm_front_drm_info *drm_info = front_info->drm_info;
if (unlikely(conn_idx >= front_info->cfg.num_connectors))
return;
xen_drm_front_kms_on_frame_done(&drm_info->pipeline[conn_idx],
fb_cookie);
}
static int xen_drm_drv_dumb_create(struct drm_file *filp,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
struct xen_drm_front_drm_info *drm_info = dev->dev_private;
struct drm_gem_object *obj;
int ret;
/*
* Dumb creation is a two stage process: first we create a fully
* constructed GEM object which is communicated to the backend, and
* only after that we can create GEM's handle. This is done so,
* because of the possible races: once you create a handle it becomes
* immediately visible to user-space, so the latter can try accessing
* object without pages etc.
* For details also see drm_gem_handle_create
*/
args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
args->size = args->pitch * args->height;
obj = xen_drm_front_gem_create(dev, args->size);
if (IS_ERR_OR_NULL(obj)) {
ret = PTR_ERR(obj);
goto fail;
}
/*
* In case of CONFIG_DRM_XEN_FRONTEND_CMA gem_obj is constructed
* via DRM CMA helpers and doesn't have ->pages allocated
* (xen_drm_front_gem_get_pages will return NULL), but instead can
* provide an sg table
*/
if (xen_drm_front_gem_get_pages(obj))
ret = xen_drm_front_dbuf_create_from_pages(drm_info->front_info,
xen_drm_front_dbuf_to_cookie(obj),
args->width, args->height, args->bpp,
args->size,
xen_drm_front_gem_get_pages(obj));
else
ret = xen_drm_front_dbuf_create_from_sgt(drm_info->front_info,
xen_drm_front_dbuf_to_cookie(obj),
args->width, args->height, args->bpp,
args->size,
xen_drm_front_gem_get_sg_table(obj));
if (ret)
goto fail_backend;
/* This is the tail of GEM object creation */
ret = drm_gem_handle_create(filp, obj, &args->handle);
if (ret)
goto fail_handle;
/* Drop reference from allocate - handle holds it now */
drm_gem_object_put_unlocked(obj);
return 0;
fail_handle:
xen_drm_front_dbuf_destroy(drm_info->front_info,
xen_drm_front_dbuf_to_cookie(obj));
fail_backend:
/* drop reference from allocate */
drm_gem_object_put_unlocked(obj);
fail:
DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
return ret;
}
static void xen_drm_drv_free_object_unlocked(struct drm_gem_object *obj)
{
struct xen_drm_front_drm_info *drm_info = obj->dev->dev_private;
int idx;
if (drm_dev_enter(obj->dev, &idx)) {
xen_drm_front_dbuf_destroy(drm_info->front_info,
xen_drm_front_dbuf_to_cookie(obj));
drm_dev_exit(idx);
} else {
dbuf_free(&drm_info->front_info->dbuf_list,
xen_drm_front_dbuf_to_cookie(obj));
}
xen_drm_front_gem_free_object_unlocked(obj);
}
static void xen_drm_drv_release(struct drm_device *dev)
{
struct xen_drm_front_drm_info *drm_info = dev->dev_private;
struct xen_drm_front_info *front_info = drm_info->front_info;
xen_drm_front_kms_fini(drm_info);
drm_atomic_helper_shutdown(dev);
drm_mode_config_cleanup(dev);
drm_dev_fini(dev);
kfree(dev);
if (front_info->cfg.be_alloc)
xenbus_switch_state(front_info->xb_dev,
XenbusStateInitialising);
kfree(drm_info);
}
static const struct file_operations xen_drm_dev_fops = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
#endif
.poll = drm_poll,
.read = drm_read,
.llseek = no_llseek,
#ifdef CONFIG_DRM_XEN_FRONTEND_CMA
.mmap = drm_gem_cma_mmap,
#else
.mmap = xen_drm_front_gem_mmap,
#endif
};
static const struct vm_operations_struct xen_drm_drv_vm_ops = {
.open = drm_gem_vm_open,
.close = drm_gem_vm_close,
};
static struct drm_driver xen_drm_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET |
DRIVER_PRIME | DRIVER_ATOMIC,
.release = xen_drm_drv_release,
.gem_vm_ops = &xen_drm_drv_vm_ops,
.gem_free_object_unlocked = xen_drm_drv_free_object_unlocked,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import = drm_gem_prime_import,
.gem_prime_export = drm_gem_prime_export,
.gem_prime_import_sg_table = xen_drm_front_gem_import_sg_table,
.gem_prime_get_sg_table = xen_drm_front_gem_get_sg_table,
.dumb_create = xen_drm_drv_dumb_create,
.fops = &xen_drm_dev_fops,
.name = "xendrm-du",
.desc = "Xen PV DRM Display Unit",
.date = "20180221",
.major = 1,
.minor = 0,
#ifdef CONFIG_DRM_XEN_FRONTEND_CMA
.gem_prime_vmap = drm_gem_cma_prime_vmap,
.gem_prime_vunmap = drm_gem_cma_prime_vunmap,
.gem_prime_mmap = drm_gem_cma_prime_mmap,
#else
.gem_prime_vmap = xen_drm_front_gem_prime_vmap,
.gem_prime_vunmap = xen_drm_front_gem_prime_vunmap,
.gem_prime_mmap = xen_drm_front_gem_prime_mmap,
#endif
};
static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
{
struct device *dev = &front_info->xb_dev->dev;
struct xen_drm_front_drm_info *drm_info;
struct drm_device *drm_dev;
int ret;
DRM_INFO("Creating %s\n", xen_drm_driver.desc);
drm_info = kzalloc(sizeof(*drm_info), GFP_KERNEL);
if (!drm_info) {
ret = -ENOMEM;
goto fail;
}
drm_info->front_info = front_info;
front_info->drm_info = drm_info;
drm_dev = drm_dev_alloc(&xen_drm_driver, dev);
if (!drm_dev) {
ret = -ENOMEM;
goto fail;
}
drm_info->drm_dev = drm_dev;
drm_dev->dev_private = drm_info;
ret = xen_drm_front_kms_init(drm_info);
if (ret) {
DRM_ERROR("Failed to initialize DRM/KMS, ret %d\n", ret);
goto fail_modeset;
}
ret = drm_dev_register(drm_dev, 0);
if (ret)
goto fail_register;
DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
xen_drm_driver.name, xen_drm_driver.major,
xen_drm_driver.minor, xen_drm_driver.patchlevel,
xen_drm_driver.date, drm_dev->primary->index);
return 0;
fail_register:
drm_dev_unregister(drm_dev);
fail_modeset:
drm_kms_helper_poll_fini(drm_dev);
drm_mode_config_cleanup(drm_dev);
fail:
kfree(drm_info);
return ret;
}
static void xen_drm_drv_fini(struct xen_drm_front_info *front_info)
{
struct xen_drm_front_drm_info *drm_info = front_info->drm_info;
struct drm_device *dev;
if (!drm_info)
return;
dev = drm_info->drm_dev;
if (!dev)
return;
/* Nothing to do if device is already unplugged */
if (drm_dev_is_unplugged(dev))
return;
drm_kms_helper_poll_fini(dev);
drm_dev_unplug(dev);
front_info->drm_info = NULL;
xen_drm_front_evtchnl_free_all(front_info);
dbuf_free_all(&front_info->dbuf_list);
/*
* If we are not using backend allocated buffers, then tell the
* backend we are ready to (re)initialize. Otherwise, wait for
* drm_driver.release.
*/
if (!front_info->cfg.be_alloc)
xenbus_switch_state(front_info->xb_dev,
XenbusStateInitialising);
}
static int displback_initwait(struct xen_drm_front_info *front_info)
{
struct xen_drm_front_cfg *cfg = &front_info->cfg;
int ret;
cfg->front_info = front_info;
ret = xen_drm_front_cfg_card(front_info, cfg);
if (ret < 0)
return ret;
DRM_INFO("Have %d connector(s)\n", cfg->num_connectors);
/* Create event channels for all connectors and publish */
ret = xen_drm_front_evtchnl_create_all(front_info);
if (ret < 0)
return ret;
return xen_drm_front_evtchnl_publish_all(front_info);
}
static int displback_connect(struct xen_drm_front_info *front_info)
{
xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_CONNECTED);
return xen_drm_drv_init(front_info);
}
static void displback_disconnect(struct xen_drm_front_info *front_info)
{
if (!front_info->drm_info)
return;
/* Tell the backend to wait until we release the DRM driver. */
xenbus_switch_state(front_info->xb_dev, XenbusStateReconfiguring);
xen_drm_drv_fini(front_info);
}
static void displback_changed(struct xenbus_device *xb_dev,
enum xenbus_state backend_state)
{
struct xen_drm_front_info *front_info = dev_get_drvdata(&xb_dev->dev);
int ret;
DRM_DEBUG("Backend state is %s, front is %s\n",
xenbus_strstate(backend_state),
xenbus_strstate(xb_dev->state));
switch (backend_state) {
case XenbusStateReconfiguring:
/* fall through */
case XenbusStateReconfigured:
/* fall through */
case XenbusStateInitialised:
break;
case XenbusStateInitialising:
if (xb_dev->state == XenbusStateReconfiguring)
break;
/* recovering after backend unexpected closure */
displback_disconnect(front_info);
break;
case XenbusStateInitWait:
if (xb_dev->state == XenbusStateReconfiguring)
break;
/* recovering after backend unexpected closure */
displback_disconnect(front_info);
if (xb_dev->state != XenbusStateInitialising)
break;
ret = displback_initwait(front_info);
if (ret < 0)
xenbus_dev_fatal(xb_dev, ret, "initializing frontend");
else
xenbus_switch_state(xb_dev, XenbusStateInitialised);
break;
case XenbusStateConnected:
if (xb_dev->state != XenbusStateInitialised)
break;
ret = displback_connect(front_info);
if (ret < 0) {
displback_disconnect(front_info);
xenbus_dev_fatal(xb_dev, ret, "connecting backend");
} else {
xenbus_switch_state(xb_dev, XenbusStateConnected);
}
break;
case XenbusStateClosing:
/*
* in this state backend starts freeing resources,
* so let it go into closed state, so we can also
* remove ours
*/
break;
case XenbusStateUnknown:
/* fall through */
case XenbusStateClosed:
if (xb_dev->state == XenbusStateClosed)
break;
displback_disconnect(front_info);
break;
}
}
static int xen_drv_probe(struct xenbus_device *xb_dev,
const struct xenbus_device_id *id)
{
struct xen_drm_front_info *front_info;
struct device *dev = &xb_dev->dev;
int ret;
/*
* The device is not spawned from a device tree, so arch_setup_dma_ops
* is not called, thus leaving the device with dummy DMA ops.
* This makes the device return error on PRIME buffer import, which
* is not correct: to fix this call of_dma_configure() with a NULL
* node to set default DMA ops.
*/
dev->bus->force_dma = true;
dev->coherent_dma_mask = DMA_BIT_MASK(32);
ret = of_dma_configure(dev, NULL);
if (ret < 0) {
DRM_ERROR("Cannot setup DMA ops, ret %d", ret);
return ret;
}
front_info = devm_kzalloc(&xb_dev->dev,
sizeof(*front_info), GFP_KERNEL);
if (!front_info)
return -ENOMEM;
front_info->xb_dev = xb_dev;
spin_lock_init(&front_info->io_lock);
INIT_LIST_HEAD(&front_info->dbuf_list);
dev_set_drvdata(&xb_dev->dev, front_info);
return xenbus_switch_state(xb_dev, XenbusStateInitialising);
}
static int xen_drv_remove(struct xenbus_device *dev)
{
struct xen_drm_front_info *front_info = dev_get_drvdata(&dev->dev);
int to = 100;
xenbus_switch_state(dev, XenbusStateClosing);
/*
* On driver removal it is disconnected from XenBus,
* so no backend state change events come via .otherend_changed
* callback. This prevents us from exiting gracefully, e.g.
* signaling the backend to free event channels, waiting for its
* state to change to XenbusStateClosed and cleaning at our end.
* Normally, when the front driver is removed, the backend will
* finally go into the XenbusStateInitWait state.
*
* Workaround: read backend's state manually and wait with time-out.
*/
while ((xenbus_read_unsigned(front_info->xb_dev->otherend, "state",
XenbusStateUnknown) != XenbusStateInitWait) &&
to--)
msleep(10);
if (!to) {
unsigned int state;
state = xenbus_read_unsigned(front_info->xb_dev->otherend,
"state", XenbusStateUnknown);
DRM_ERROR("Backend state is %s while removing driver\n",
xenbus_strstate(state));
}
xen_drm_drv_fini(front_info);
xenbus_frontend_closed(dev);
return 0;
}
static const struct xenbus_device_id xen_driver_ids[] = {
{ XENDISPL_DRIVER_NAME },
{ "" }
};
static struct xenbus_driver xen_driver = {
.ids = xen_driver_ids,
.probe = xen_drv_probe,
.remove = xen_drv_remove,
.otherend_changed = displback_changed,
};
static int __init xen_drv_init(void)
{
/* At the moment we only support case with XEN_PAGE_SIZE == PAGE_SIZE */
if (XEN_PAGE_SIZE != PAGE_SIZE) {
DRM_ERROR(XENDISPL_DRIVER_NAME ": different kernel and Xen page sizes are not supported: XEN_PAGE_SIZE (%lu) != PAGE_SIZE (%lu)\n",
XEN_PAGE_SIZE, PAGE_SIZE);
return -ENODEV;
}
if (!xen_domain())
return -ENODEV;
if (!xen_has_pv_devices())
return -ENODEV;
DRM_INFO("Registering XEN PV " XENDISPL_DRIVER_NAME "\n");
return xenbus_register_frontend(&xen_driver);
}
static void __exit xen_drv_fini(void)
{
DRM_INFO("Unregistering XEN PV " XENDISPL_DRIVER_NAME "\n");
xenbus_unregister_driver(&xen_driver);
}
module_init(xen_drv_init);
module_exit(xen_drv_fini);
MODULE_DESCRIPTION("Xen para-virtualized display device frontend");
MODULE_LICENSE("GPL");
MODULE_ALIAS("xen:" XENDISPL_DRIVER_NAME);