/*
 * Copyright (c) 2005 Topspin Communications.  All rights reserved.
 * Copyright (c) 2005 Intel Corporation.  All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses.  You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 *     Redistribution and use in source and binary forms, with or
 *     without modification, are permitted provided that the following
 *     conditions are met:
 *
 *      - Redistributions of source code must retain the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer.
 *
 *      - Redistributions in binary form must reproduce the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer in the documentation and/or other materials
 *        provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <linux/completion.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/poll.h>
#include <linux/sched.h>
#include <linux/file.h>
#include <linux/mount.h>
#include <linux/cdev.h>
#include <linux/idr.h>
#include <linux/mutex.h>
#include <linux/slab.h>

#include <asm/uaccess.h>

#include <rdma/ib_cm.h>
#include <rdma/ib_user_cm.h>
#include <rdma/ib_marshall.h>

MODULE_AUTHOR("Libor Michalek");
MODULE_DESCRIPTION("InfiniBand userspace Connection Manager access");
MODULE_LICENSE("Dual BSD/GPL");

struct ib_ucm_device {
	int devnum;
	struct cdev cdev;
	struct device dev;
	struct ib_device *ib_dev;
};

struct ib_ucm_file {
	struct mutex file_mutex;
	struct file *filp;
	struct ib_ucm_device *device;

	struct list_head ctxs;
	struct list_head events;
	wait_queue_head_t poll_wait;
};

struct ib_ucm_context {
	int id;
	struct completion comp;
	atomic_t ref;
	int events_reported;

	struct ib_ucm_file *file;
	struct ib_cm_id *cm_id;
	__u64 uid;

	struct list_head events;    /* list of pending events. */
	struct list_head file_list; /* member in file ctx list */
};

struct ib_ucm_event {
	struct ib_ucm_context *ctx;
	struct list_head file_list; /* member in file event list */
	struct list_head ctx_list;  /* member in ctx event list */

	struct ib_cm_id *cm_id;
	struct ib_ucm_event_resp resp;
	void *data;
	void *info;
	int data_len;
	int info_len;
};

enum {
	IB_UCM_MAJOR = 231,
	IB_UCM_BASE_MINOR = 224,
	IB_UCM_MAX_DEVICES = 32
};

/* ib_cm and ib_user_cm modules share /sys/class/infiniband_cm */
extern struct class cm_class;

#define IB_UCM_BASE_DEV MKDEV(IB_UCM_MAJOR, IB_UCM_BASE_MINOR)

static void ib_ucm_add_one(struct ib_device *device);
static void ib_ucm_remove_one(struct ib_device *device);

static struct ib_client ucm_client = {
	.name   = "ucm",
	.add    = ib_ucm_add_one,
	.remove = ib_ucm_remove_one
};

static DEFINE_MUTEX(ctx_id_mutex);
static DEFINE_IDR(ctx_id_table);
static DECLARE_BITMAP(dev_map, IB_UCM_MAX_DEVICES);
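
/*
 * Look up a context by id under ctx_id_mutex and take a reference on it.
 * Returns ERR_PTR(-ENOENT) if the id is unknown and ERR_PTR(-EINVAL) if the
 * context belongs to a different open file.  Callers drop the reference
 * with ib_ucm_ctx_put().
 */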
static struct ib_ucm_context *ib_ucm_ctx_get(struct ib_ucm_file *file, int id)
{
	struct ib_ucm_context *ctx;

	mutex_lock(&ctx_id_mutex);
	ctx = idr_find(&ctx_id_table, id);
	if (!ctx)
		ctx = ERR_PTR(-ENOENT);
	else if (ctx->file != file)
		ctx = ERR_PTR(-EINVAL);
	else
		atomic_inc(&ctx->ref);
	mutex_unlock(&ctx_id_mutex);

	return ctx;
}

static void ib_ucm_ctx_put(struct ib_ucm_context *ctx)
{
	if (atomic_dec_and_test(&ctx->ref))
		complete(&ctx->comp);
}

static inline int ib_ucm_new_cm_id(int event)
{
	return event == IB_CM_REQ_RECEIVED || event == IB_CM_SIDR_REQ_RECEIVED;
}
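
/*
 * Unlink the context from its file and free any events still queued on it.
 * Events that carry a new cm_id (REQ/SIDR_REQ) own that id, so it is
 * destroyed here as well.
 */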
static void ib_ucm_cleanup_events(struct ib_ucm_context *ctx)
{
	struct ib_ucm_event *uevent;

	mutex_lock(&ctx->file->file_mutex);
	list_del(&ctx->file_list);
	while (!list_empty(&ctx->events)) {

		uevent = list_entry(ctx->events.next,
				    struct ib_ucm_event, ctx_list);
		list_del(&uevent->file_list);
		list_del(&uevent->ctx_list);
		mutex_unlock(&ctx->file->file_mutex);

		/* clear incoming connections. */
		if (ib_ucm_new_cm_id(uevent->resp.event))
			ib_destroy_cm_id(uevent->cm_id);

		kfree(uevent);
		mutex_lock(&ctx->file->file_mutex);
	}
	mutex_unlock(&ctx->file->file_mutex);
}
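
/*
 * Allocate a new context for this file, give it an id in ctx_id_table and
 * add it to the file's context list.  The context starts with one
 * reference; ctx->comp is completed when the last reference is dropped.
 */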
static struct ib_ucm_context *ib_ucm_ctx_alloc(struct ib_ucm_file *file)
{
	struct ib_ucm_context *ctx;
	int result;

	ctx = kzalloc(sizeof *ctx, GFP_KERNEL);
	if (!ctx)
		return NULL;

	atomic_set(&ctx->ref, 1);
	init_completion(&ctx->comp);
	ctx->file = file;
	INIT_LIST_HEAD(&ctx->events);

	do {
		result = idr_pre_get(&ctx_id_table, GFP_KERNEL);
		if (!result)
			goto error;

		mutex_lock(&ctx_id_mutex);
		result = idr_get_new(&ctx_id_table, ctx, &ctx->id);
		mutex_unlock(&ctx_id_mutex);
	} while (result == -EAGAIN);

	if (result)
		goto error;

	list_add_tail(&ctx->file_list, &file->ctxs);
	return ctx;

error:
	kfree(ctx);
	return NULL;
}

static void ib_ucm_event_req_get(struct ib_ucm_req_event_resp *ureq,
				 struct ib_cm_req_event_param *kreq)
{
	ureq->remote_ca_guid = kreq->remote_ca_guid;
	ureq->remote_qkey = kreq->remote_qkey;
	ureq->remote_qpn = kreq->remote_qpn;
	ureq->qp_type = kreq->qp_type;
	ureq->starting_psn = kreq->starting_psn;
	ureq->responder_resources = kreq->responder_resources;
	ureq->initiator_depth = kreq->initiator_depth;
	ureq->local_cm_response_timeout = kreq->local_cm_response_timeout;
	ureq->flow_control = kreq->flow_control;
	ureq->remote_cm_response_timeout = kreq->remote_cm_response_timeout;
	ureq->retry_count = kreq->retry_count;
	ureq->rnr_retry_count = kreq->rnr_retry_count;
	ureq->srq = kreq->srq;
	ureq->port = kreq->port;

	ib_copy_path_rec_to_user(&ureq->primary_path, kreq->primary_path);
	if (kreq->alternate_path)
		ib_copy_path_rec_to_user(&ureq->alternate_path,
					 kreq->alternate_path);
}

static void ib_ucm_event_rep_get(struct ib_ucm_rep_event_resp *urep,
				 struct ib_cm_rep_event_param *krep)
{
	urep->remote_ca_guid = krep->remote_ca_guid;
	urep->remote_qkey = krep->remote_qkey;
	urep->remote_qpn = krep->remote_qpn;
	urep->starting_psn = krep->starting_psn;
	urep->responder_resources = krep->responder_resources;
	urep->initiator_depth = krep->initiator_depth;
	urep->target_ack_delay = krep->target_ack_delay;
	urep->failover_accepted = krep->failover_accepted;
	urep->flow_control = krep->flow_control;
	urep->rnr_retry_count = krep->rnr_retry_count;
	urep->srq = krep->srq;
}

static void ib_ucm_event_sidr_rep_get(struct ib_ucm_sidr_rep_event_resp *urep,
				      struct ib_cm_sidr_rep_event_param *krep)
{
	urep->status = krep->status;
	urep->qkey = krep->qkey;
	urep->qpn = krep->qpn;
}
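
/*
 * Translate a kernel ib_cm event into the userspace response carried by a
 * struct ib_ucm_event, copying any private data and additional info into
 * kernel-allocated buffers.
 */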
static int ib_ucm_event_process(struct ib_cm_event *evt,
				struct ib_ucm_event *uvt)
{
	void *info = NULL;

	switch (evt->event) {
	case IB_CM_REQ_RECEIVED:
		ib_ucm_event_req_get(&uvt->resp.u.req_resp,
				     &evt->param.req_rcvd);
		uvt->data_len = IB_CM_REQ_PRIVATE_DATA_SIZE;
		uvt->resp.present = IB_UCM_PRES_PRIMARY;
		uvt->resp.present |= (evt->param.req_rcvd.alternate_path ?
				      IB_UCM_PRES_ALTERNATE : 0);
		break;
	case IB_CM_REP_RECEIVED:
		ib_ucm_event_rep_get(&uvt->resp.u.rep_resp,
				     &evt->param.rep_rcvd);
		uvt->data_len = IB_CM_REP_PRIVATE_DATA_SIZE;
		break;
	case IB_CM_RTU_RECEIVED:
		uvt->data_len = IB_CM_RTU_PRIVATE_DATA_SIZE;
		uvt->resp.u.send_status = evt->param.send_status;
		break;
	case IB_CM_DREQ_RECEIVED:
		uvt->data_len = IB_CM_DREQ_PRIVATE_DATA_SIZE;
		uvt->resp.u.send_status = evt->param.send_status;
		break;
	case IB_CM_DREP_RECEIVED:
		uvt->data_len = IB_CM_DREP_PRIVATE_DATA_SIZE;
		uvt->resp.u.send_status = evt->param.send_status;
		break;
	case IB_CM_MRA_RECEIVED:
		uvt->resp.u.mra_resp.timeout =
			evt->param.mra_rcvd.service_timeout;
		uvt->data_len = IB_CM_MRA_PRIVATE_DATA_SIZE;
		break;
	case IB_CM_REJ_RECEIVED:
		uvt->resp.u.rej_resp.reason = evt->param.rej_rcvd.reason;
		uvt->data_len = IB_CM_REJ_PRIVATE_DATA_SIZE;
		uvt->info_len = evt->param.rej_rcvd.ari_length;
		info = evt->param.rej_rcvd.ari;
		break;
	case IB_CM_LAP_RECEIVED:
		ib_copy_path_rec_to_user(&uvt->resp.u.lap_resp.path,
					 evt->param.lap_rcvd.alternate_path);
		uvt->data_len = IB_CM_LAP_PRIVATE_DATA_SIZE;
		uvt->resp.present = IB_UCM_PRES_ALTERNATE;
		break;
	case IB_CM_APR_RECEIVED:
		uvt->resp.u.apr_resp.status = evt->param.apr_rcvd.ap_status;
		uvt->data_len = IB_CM_APR_PRIVATE_DATA_SIZE;
		uvt->info_len = evt->param.apr_rcvd.info_len;
		info = evt->param.apr_rcvd.apr_info;
		break;
	case IB_CM_SIDR_REQ_RECEIVED:
		uvt->resp.u.sidr_req_resp.pkey =
			evt->param.sidr_req_rcvd.pkey;
		uvt->resp.u.sidr_req_resp.port =
			evt->param.sidr_req_rcvd.port;
		uvt->data_len = IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE;
		break;
	case IB_CM_SIDR_REP_RECEIVED:
		ib_ucm_event_sidr_rep_get(&uvt->resp.u.sidr_rep_resp,
					  &evt->param.sidr_rep_rcvd);
		uvt->data_len = IB_CM_SIDR_REP_PRIVATE_DATA_SIZE;
		uvt->info_len = evt->param.sidr_rep_rcvd.info_len;
		info = evt->param.sidr_rep_rcvd.info;
		break;
	default:
		uvt->resp.u.send_status = evt->param.send_status;
		break;
	}

	if (uvt->data_len) {
		uvt->data = kmemdup(evt->private_data, uvt->data_len, GFP_KERNEL);
		if (!uvt->data)
			goto err1;

		uvt->resp.present |= IB_UCM_PRES_DATA;
	}

	if (uvt->info_len) {
		uvt->info = kmemdup(info, uvt->info_len, GFP_KERNEL);
		if (!uvt->info)
			goto err2;

		uvt->resp.present |= IB_UCM_PRES_INFO;
	}
	return 0;

err2:
	kfree(uvt->data);
err1:
	return -ENOMEM;
}
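
/*
 * ib_cm callback: package the event for userspace, queue it on the owning
 * file and wake up any reader.  On failure, a non-zero value is returned
 * only for REQ/SIDR_REQ events so that the newly created cm_id is destroyed.
 */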
static int ib_ucm_event_handler(struct ib_cm_id *cm_id,
				struct ib_cm_event *event)
{
	struct ib_ucm_event *uevent;
	struct ib_ucm_context *ctx;
	int result = 0;

	ctx = cm_id->context;

	uevent = kzalloc(sizeof *uevent, GFP_KERNEL);
	if (!uevent)
		goto err1;

	uevent->ctx = ctx;
	uevent->cm_id = cm_id;
	uevent->resp.uid = ctx->uid;
	uevent->resp.id = ctx->id;
	uevent->resp.event = event->event;

	result = ib_ucm_event_process(event, uevent);
	if (result)
		goto err2;

	mutex_lock(&ctx->file->file_mutex);
	list_add_tail(&uevent->file_list, &ctx->file->events);
	list_add_tail(&uevent->ctx_list, &ctx->events);
	wake_up_interruptible(&ctx->file->poll_wait);
	mutex_unlock(&ctx->file->file_mutex);
	return 0;

err2:
	kfree(uevent);
err1:
	/* Destroy new cm_id's */
	return ib_ucm_new_cm_id(event->event);
}
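
/*
 * IB_USER_CM_CMD_EVENT: block (unless O_NONBLOCK) until an event is queued
 * on this file, bind a fresh context to the cm_id of an incoming REQ or
 * SIDR_REQ, and copy the event, private data and info out to userspace.
 */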
static ssize_t ib_ucm_event(struct ib_ucm_file *file,
			    const char __user *inbuf,
			    int in_len, int out_len)
{
	struct ib_ucm_context *ctx;
	struct ib_ucm_event_get cmd;
	struct ib_ucm_event *uevent;
	int result = 0;
	DEFINE_WAIT(wait);

	if (out_len < sizeof(struct ib_ucm_event_resp))
		return -ENOSPC;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	mutex_lock(&file->file_mutex);
	while (list_empty(&file->events)) {
		mutex_unlock(&file->file_mutex);

		if (file->filp->f_flags & O_NONBLOCK)
			return -EAGAIN;

		if (wait_event_interruptible(file->poll_wait,
					     !list_empty(&file->events)))
			return -ERESTARTSYS;

		mutex_lock(&file->file_mutex);
	}

	uevent = list_entry(file->events.next, struct ib_ucm_event, file_list);

	if (ib_ucm_new_cm_id(uevent->resp.event)) {
		ctx = ib_ucm_ctx_alloc(file);
		if (!ctx) {
			result = -ENOMEM;
			goto done;
		}

		ctx->cm_id = uevent->cm_id;
		ctx->cm_id->context = ctx;
		uevent->resp.id = ctx->id;
	}

	if (copy_to_user((void __user *)(unsigned long)cmd.response,
			 &uevent->resp, sizeof(uevent->resp))) {
		result = -EFAULT;
		goto done;
	}

	if (uevent->data) {
		if (cmd.data_len < uevent->data_len) {
			result = -ENOMEM;
			goto done;
		}
		if (copy_to_user((void __user *)(unsigned long)cmd.data,
				 uevent->data, uevent->data_len)) {
			result = -EFAULT;
			goto done;
		}
	}

	if (uevent->info) {
		if (cmd.info_len < uevent->info_len) {
			result = -ENOMEM;
			goto done;
		}
		if (copy_to_user((void __user *)(unsigned long)cmd.info,
				 uevent->info, uevent->info_len)) {
			result = -EFAULT;
			goto done;
		}
	}

	list_del(&uevent->file_list);
	list_del(&uevent->ctx_list);
	uevent->ctx->events_reported++;

	kfree(uevent->data);
	kfree(uevent->info);
	kfree(uevent);
done:
	mutex_unlock(&file->file_mutex);
	return result;
}

static ssize_t ib_ucm_create_id(struct ib_ucm_file *file,
				const char __user *inbuf,
				int in_len, int out_len)
{
	struct ib_ucm_create_id cmd;
	struct ib_ucm_create_id_resp resp;
	struct ib_ucm_context *ctx;
	int result;

	if (out_len < sizeof(resp))
		return -ENOSPC;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	mutex_lock(&file->file_mutex);
	ctx = ib_ucm_ctx_alloc(file);
	mutex_unlock(&file->file_mutex);
	if (!ctx)
		return -ENOMEM;

	ctx->uid = cmd.uid;
	ctx->cm_id = ib_create_cm_id(file->device->ib_dev,
				     ib_ucm_event_handler, ctx);
	if (IS_ERR(ctx->cm_id)) {
		result = PTR_ERR(ctx->cm_id);
		goto err1;
	}

	resp.id = ctx->id;
	if (copy_to_user((void __user *)(unsigned long)cmd.response,
			 &resp, sizeof(resp))) {
		result = -EFAULT;
		goto err2;
	}
	return 0;

err2:
	ib_destroy_cm_id(ctx->cm_id);
err1:
	mutex_lock(&ctx_id_mutex);
	idr_remove(&ctx_id_table, ctx->id);
	mutex_unlock(&ctx_id_mutex);
	kfree(ctx);
	return result;
}
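
/*
 * IB_USER_CM_CMD_DESTROY_ID: remove the context from the id table, wait for
 * outstanding references to drain, destroy the cm_id and reap any events
 * that were never reported to userspace.
 */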
static ssize_t ib_ucm_destroy_id(struct ib_ucm_file *file,
				 const char __user *inbuf,
				 int in_len, int out_len)
{
	struct ib_ucm_destroy_id cmd;
	struct ib_ucm_destroy_id_resp resp;
	struct ib_ucm_context *ctx;
	int result = 0;

	if (out_len < sizeof(resp))
		return -ENOSPC;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	mutex_lock(&ctx_id_mutex);
	ctx = idr_find(&ctx_id_table, cmd.id);
	if (!ctx)
		ctx = ERR_PTR(-ENOENT);
	else if (ctx->file != file)
		ctx = ERR_PTR(-EINVAL);
	else
		idr_remove(&ctx_id_table, ctx->id);
	mutex_unlock(&ctx_id_mutex);

	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	ib_ucm_ctx_put(ctx);
	wait_for_completion(&ctx->comp);

	/* No new events will be generated after destroying the cm_id. */
	ib_destroy_cm_id(ctx->cm_id);
	/* Cleanup events not yet reported to the user. */
	ib_ucm_cleanup_events(ctx);

	resp.events_reported = ctx->events_reported;
	if (copy_to_user((void __user *)(unsigned long)cmd.response,
			 &resp, sizeof(resp)))
		result = -EFAULT;

	kfree(ctx);
	return result;
}

static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file,
			      const char __user *inbuf,
			      int in_len, int out_len)
{
	struct ib_ucm_attr_id_resp resp;
	struct ib_ucm_attr_id cmd;
	struct ib_ucm_context *ctx;
	int result = 0;

	if (out_len < sizeof(resp))
		return -ENOSPC;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	resp.service_id = ctx->cm_id->service_id;
	resp.service_mask = ctx->cm_id->service_mask;
	resp.local_id = ctx->cm_id->local_id;
	resp.remote_id = ctx->cm_id->remote_id;

	if (copy_to_user((void __user *)(unsigned long)cmd.response,
			 &resp, sizeof(resp)))
		result = -EFAULT;

	ib_ucm_ctx_put(ctx);
	return result;
}

static ssize_t ib_ucm_init_qp_attr(struct ib_ucm_file *file,
				   const char __user *inbuf,
				   int in_len, int out_len)
{
	struct ib_uverbs_qp_attr resp;
	struct ib_ucm_init_qp_attr cmd;
	struct ib_ucm_context *ctx;
	struct ib_qp_attr qp_attr;
	int result = 0;

	if (out_len < sizeof(resp))
		return -ENOSPC;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	resp.qp_attr_mask = 0;
	memset(&qp_attr, 0, sizeof qp_attr);
	qp_attr.qp_state = cmd.qp_state;
	result = ib_cm_init_qp_attr(ctx->cm_id, &qp_attr, &resp.qp_attr_mask);
	if (result)
		goto out;

	ib_copy_qp_attr_to_user(&resp, &qp_attr);

	if (copy_to_user((void __user *)(unsigned long)cmd.response,
			 &resp, sizeof(resp)))
		result = -EFAULT;

out:
	ib_ucm_ctx_put(ctx);
	return result;
}

static int ucm_validate_listen(__be64 service_id, __be64 service_mask)
{
	service_id &= service_mask;

	if (((service_id & IB_CMA_SERVICE_ID_MASK) == IB_CMA_SERVICE_ID) ||
	    ((service_id & IB_SDP_SERVICE_ID_MASK) == IB_SDP_SERVICE_ID))
		return -EINVAL;

	return 0;
}

static ssize_t ib_ucm_listen(struct ib_ucm_file *file,
			     const char __user *inbuf,
			     int in_len, int out_len)
{
	struct ib_ucm_listen cmd;
	struct ib_ucm_context *ctx;
	int result;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	result = ucm_validate_listen(cmd.service_id, cmd.service_mask);
	if (result)
		goto out;

	result = ib_cm_listen(ctx->cm_id, cmd.service_id, cmd.service_mask,
			      NULL);
out:
	ib_ucm_ctx_put(ctx);
	return result;
}

static ssize_t ib_ucm_notify(struct ib_ucm_file *file,
			     const char __user *inbuf,
			     int in_len, int out_len)
{
	struct ib_ucm_notify cmd;
	struct ib_ucm_context *ctx;
	int result;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	result = ib_cm_notify(ctx->cm_id, (enum ib_event_type) cmd.event);
	ib_ucm_ctx_put(ctx);
	return result;
}
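
/*
 * Copy a variable-length userspace buffer into a kernel allocation; a zero
 * length simply leaves *dest NULL.
 */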
static int ib_ucm_alloc_data(const void **dest, u64 src, u32 len)
{
	void *data;

	*dest = NULL;

	if (!len)
		return 0;

	data = memdup_user((void __user *)(unsigned long)src, len);
	if (IS_ERR(data))
		return PTR_ERR(data);

	*dest = data;
	return 0;
}

static int ib_ucm_path_get(struct ib_sa_path_rec **path, u64 src)
{
	struct ib_user_path_rec upath;
	struct ib_sa_path_rec *sa_path;

	*path = NULL;

	if (!src)
		return 0;

	sa_path = kmalloc(sizeof(*sa_path), GFP_KERNEL);
	if (!sa_path)
		return -ENOMEM;

	if (copy_from_user(&upath, (void __user *)(unsigned long)src,
			   sizeof(upath))) {

		kfree(sa_path);
		return -EFAULT;
	}

	ib_copy_path_rec_from_user(sa_path, &upath);
	*path = sa_path;
	return 0;
}

static ssize_t ib_ucm_send_req(struct ib_ucm_file *file,
			       const char __user *inbuf,
			       int in_len, int out_len)
{
	struct ib_cm_req_param param;
	struct ib_ucm_context *ctx;
	struct ib_ucm_req cmd;
	int result;

	param.private_data   = NULL;
	param.primary_path   = NULL;
	param.alternate_path = NULL;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	result = ib_ucm_alloc_data(&param.private_data, cmd.data, cmd.len);
	if (result)
		goto done;

	result = ib_ucm_path_get(&param.primary_path, cmd.primary_path);
	if (result)
		goto done;

	result = ib_ucm_path_get(&param.alternate_path, cmd.alternate_path);
	if (result)
		goto done;

	param.private_data_len = cmd.len;
	param.service_id = cmd.sid;
	param.qp_num = cmd.qpn;
	param.qp_type = cmd.qp_type;
	param.starting_psn = cmd.psn;
	param.peer_to_peer = cmd.peer_to_peer;
	param.responder_resources = cmd.responder_resources;
	param.initiator_depth = cmd.initiator_depth;
	param.remote_cm_response_timeout = cmd.remote_cm_response_timeout;
	param.flow_control = cmd.flow_control;
	param.local_cm_response_timeout = cmd.local_cm_response_timeout;
	param.retry_count = cmd.retry_count;
	param.rnr_retry_count = cmd.rnr_retry_count;
	param.max_cm_retries = cmd.max_cm_retries;
	param.srq = cmd.srq;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (!IS_ERR(ctx)) {
		result = ib_send_cm_req(ctx->cm_id, &param);
		ib_ucm_ctx_put(ctx);
	} else
		result = PTR_ERR(ctx);

done:
	kfree(param.private_data);
	kfree(param.primary_path);
	kfree(param.alternate_path);
	return result;
}

static ssize_t ib_ucm_send_rep(struct ib_ucm_file *file,
			       const char __user *inbuf,
			       int in_len, int out_len)
{
	struct ib_cm_rep_param param;
	struct ib_ucm_context *ctx;
	struct ib_ucm_rep cmd;
	int result;

	param.private_data = NULL;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	result = ib_ucm_alloc_data(&param.private_data, cmd.data, cmd.len);
	if (result)
		return result;

	param.qp_num = cmd.qpn;
	param.starting_psn = cmd.psn;
	param.private_data_len = cmd.len;
	param.responder_resources = cmd.responder_resources;
	param.initiator_depth = cmd.initiator_depth;
	param.failover_accepted = cmd.failover_accepted;
	param.flow_control = cmd.flow_control;
	param.rnr_retry_count = cmd.rnr_retry_count;
	param.srq = cmd.srq;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (!IS_ERR(ctx)) {
		ctx->uid = cmd.uid;
		result = ib_send_cm_rep(ctx->cm_id, &param);
		ib_ucm_ctx_put(ctx);
	} else
		result = PTR_ERR(ctx);

	kfree(param.private_data);
	return result;
}

static ssize_t ib_ucm_send_private_data(struct ib_ucm_file *file,
					const char __user *inbuf, int in_len,
					int (*func)(struct ib_cm_id *cm_id,
						    const void *private_data,
						    u8 private_data_len))
{
	struct ib_ucm_private_data cmd;
	struct ib_ucm_context *ctx;
	const void *private_data = NULL;
	int result;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	result = ib_ucm_alloc_data(&private_data, cmd.data, cmd.len);
	if (result)
		return result;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (!IS_ERR(ctx)) {
		result = func(ctx->cm_id, private_data, cmd.len);
		ib_ucm_ctx_put(ctx);
	} else
		result = PTR_ERR(ctx);

	kfree(private_data);
	return result;
}

static ssize_t ib_ucm_send_rtu(struct ib_ucm_file *file,
			       const char __user *inbuf,
			       int in_len, int out_len)
{
	return ib_ucm_send_private_data(file, inbuf, in_len, ib_send_cm_rtu);
}

static ssize_t ib_ucm_send_dreq(struct ib_ucm_file *file,
				const char __user *inbuf,
				int in_len, int out_len)
{
	return ib_ucm_send_private_data(file, inbuf, in_len, ib_send_cm_dreq);
}

static ssize_t ib_ucm_send_drep(struct ib_ucm_file *file,
				const char __user *inbuf,
				int in_len, int out_len)
{
	return ib_ucm_send_private_data(file, inbuf, in_len, ib_send_cm_drep);
}

static ssize_t ib_ucm_send_info(struct ib_ucm_file *file,
				const char __user *inbuf, int in_len,
				int (*func)(struct ib_cm_id *cm_id,
					    int status,
					    const void *info,
					    u8 info_len,
					    const void *data,
					    u8 data_len))
{
	struct ib_ucm_context *ctx;
	struct ib_ucm_info cmd;
	const void *data = NULL;
	const void *info = NULL;
	int result;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	result = ib_ucm_alloc_data(&data, cmd.data, cmd.data_len);
	if (result)
		goto done;

	result = ib_ucm_alloc_data(&info, cmd.info, cmd.info_len);
	if (result)
		goto done;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (!IS_ERR(ctx)) {
		result = func(ctx->cm_id, cmd.status, info, cmd.info_len,
			      data, cmd.data_len);
		ib_ucm_ctx_put(ctx);
	} else
		result = PTR_ERR(ctx);

done:
	kfree(data);
	kfree(info);
	return result;
}

static ssize_t ib_ucm_send_rej(struct ib_ucm_file *file,
			       const char __user *inbuf,
			       int in_len, int out_len)
{
	return ib_ucm_send_info(file, inbuf, in_len, (void *)ib_send_cm_rej);
}

static ssize_t ib_ucm_send_apr(struct ib_ucm_file *file,
			       const char __user *inbuf,
			       int in_len, int out_len)
{
	return ib_ucm_send_info(file, inbuf, in_len, (void *)ib_send_cm_apr);
}

static ssize_t ib_ucm_send_mra(struct ib_ucm_file *file,
			       const char __user *inbuf,
			       int in_len, int out_len)
{
	struct ib_ucm_context *ctx;
	struct ib_ucm_mra cmd;
	const void *data = NULL;
	int result;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	result = ib_ucm_alloc_data(&data, cmd.data, cmd.len);
	if (result)
		return result;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (!IS_ERR(ctx)) {
		result = ib_send_cm_mra(ctx->cm_id, cmd.timeout, data, cmd.len);
		ib_ucm_ctx_put(ctx);
	} else
		result = PTR_ERR(ctx);

	kfree(data);
	return result;
}

static ssize_t ib_ucm_send_lap(struct ib_ucm_file *file,
			       const char __user *inbuf,
			       int in_len, int out_len)
{
	struct ib_ucm_context *ctx;
	struct ib_sa_path_rec *path = NULL;
	struct ib_ucm_lap cmd;
	const void *data = NULL;
	int result;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	result = ib_ucm_alloc_data(&data, cmd.data, cmd.len);
	if (result)
		goto done;

	result = ib_ucm_path_get(&path, cmd.path);
	if (result)
		goto done;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (!IS_ERR(ctx)) {
		result = ib_send_cm_lap(ctx->cm_id, path, data, cmd.len);
		ib_ucm_ctx_put(ctx);
	} else
		result = PTR_ERR(ctx);

done:
	kfree(data);
	kfree(path);
	return result;
}

static ssize_t ib_ucm_send_sidr_req(struct ib_ucm_file *file,
				    const char __user *inbuf,
				    int in_len, int out_len)
{
	struct ib_cm_sidr_req_param param;
	struct ib_ucm_context *ctx;
	struct ib_ucm_sidr_req cmd;
	int result;

	param.private_data = NULL;
	param.path = NULL;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	result = ib_ucm_alloc_data(&param.private_data, cmd.data, cmd.len);
	if (result)
		goto done;

	result = ib_ucm_path_get(&param.path, cmd.path);
	if (result)
		goto done;

	param.private_data_len = cmd.len;
	param.service_id = cmd.sid;
	param.timeout_ms = cmd.timeout;
	param.max_cm_retries = cmd.max_cm_retries;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (!IS_ERR(ctx)) {
		result = ib_send_cm_sidr_req(ctx->cm_id, &param);
		ib_ucm_ctx_put(ctx);
	} else
		result = PTR_ERR(ctx);

done:
	kfree(param.private_data);
	kfree(param.path);
	return result;
}

static ssize_t ib_ucm_send_sidr_rep(struct ib_ucm_file *file,
				    const char __user *inbuf,
				    int in_len, int out_len)
{
	struct ib_cm_sidr_rep_param param;
	struct ib_ucm_sidr_rep cmd;
	struct ib_ucm_context *ctx;
	int result;

	param.info = NULL;

	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
		return -EFAULT;

	result = ib_ucm_alloc_data(&param.private_data,
				   cmd.data, cmd.data_len);
	if (result)
		goto done;

	result = ib_ucm_alloc_data(&param.info, cmd.info, cmd.info_len);
	if (result)
		goto done;

	param.qp_num = cmd.qpn;
	param.qkey = cmd.qkey;
	param.status = cmd.status;
	param.info_length = cmd.info_len;
	param.private_data_len = cmd.data_len;

	ctx = ib_ucm_ctx_get(file, cmd.id);
	if (!IS_ERR(ctx)) {
		result = ib_send_cm_sidr_rep(ctx->cm_id, &param);
		ib_ucm_ctx_put(ctx);
	} else
		result = PTR_ERR(ctx);

done:
	kfree(param.private_data);
	kfree(param.info);
	return result;
}

static ssize_t (*ucm_cmd_table[])(struct ib_ucm_file *file,
				  const char __user *inbuf,
				  int in_len, int out_len) = {
	[IB_USER_CM_CMD_CREATE_ID]     = ib_ucm_create_id,
	[IB_USER_CM_CMD_DESTROY_ID]    = ib_ucm_destroy_id,
	[IB_USER_CM_CMD_ATTR_ID]       = ib_ucm_attr_id,
	[IB_USER_CM_CMD_LISTEN]        = ib_ucm_listen,
	[IB_USER_CM_CMD_NOTIFY]        = ib_ucm_notify,
	[IB_USER_CM_CMD_SEND_REQ]      = ib_ucm_send_req,
	[IB_USER_CM_CMD_SEND_REP]      = ib_ucm_send_rep,
	[IB_USER_CM_CMD_SEND_RTU]      = ib_ucm_send_rtu,
	[IB_USER_CM_CMD_SEND_DREQ]     = ib_ucm_send_dreq,
	[IB_USER_CM_CMD_SEND_DREP]     = ib_ucm_send_drep,
	[IB_USER_CM_CMD_SEND_REJ]      = ib_ucm_send_rej,
	[IB_USER_CM_CMD_SEND_MRA]      = ib_ucm_send_mra,
	[IB_USER_CM_CMD_SEND_LAP]      = ib_ucm_send_lap,
	[IB_USER_CM_CMD_SEND_APR]      = ib_ucm_send_apr,
	[IB_USER_CM_CMD_SEND_SIDR_REQ] = ib_ucm_send_sidr_req,
	[IB_USER_CM_CMD_SEND_SIDR_REP] = ib_ucm_send_sidr_rep,
	[IB_USER_CM_CMD_EVENT]         = ib_ucm_event,
	[IB_USER_CM_CMD_INIT_QP_ATTR]  = ib_ucm_init_qp_attr,
};
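
/*
 * Each write() carries one struct ib_ucm_cmd_hdr followed by the
 * command-specific payload; the header's cmd field indexes ucm_cmd_table.
 */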
static ssize_t ib_ucm_write(struct file *filp, const char __user *buf,
			    size_t len, loff_t *pos)
{
	struct ib_ucm_file *file = filp->private_data;
	struct ib_ucm_cmd_hdr hdr;
	ssize_t result;

	if (len < sizeof(hdr))
		return -EINVAL;

	if (copy_from_user(&hdr, buf, sizeof(hdr)))
		return -EFAULT;

	if (hdr.cmd < 0 || hdr.cmd >= ARRAY_SIZE(ucm_cmd_table))
		return -EINVAL;

	if (hdr.in + sizeof(hdr) > len)
		return -EINVAL;

	result = ucm_cmd_table[hdr.cmd](file, buf + sizeof(hdr),
					hdr.in, hdr.out);
	if (!result)
		result = len;

	return result;
}

static unsigned int ib_ucm_poll(struct file *filp,
				struct poll_table_struct *wait)
{
	struct ib_ucm_file *file = filp->private_data;
	unsigned int mask = 0;

	poll_wait(filp, &file->poll_wait, wait);

	if (!list_empty(&file->events))
		mask = POLLIN | POLLRDNORM;

	return mask;
}

/*
 * ib_ucm_open() does not need the BKL:
 *
 *  - no global state is referred to;
 *  - there is no ioctl method to race against;
 *  - no further module initialization is required for open to work
 *    after the device is registered.
 */
static int ib_ucm_open(struct inode *inode, struct file *filp)
{
	struct ib_ucm_file *file;

	file = kmalloc(sizeof(*file), GFP_KERNEL);
	if (!file)
		return -ENOMEM;

	INIT_LIST_HEAD(&file->events);
	INIT_LIST_HEAD(&file->ctxs);
	init_waitqueue_head(&file->poll_wait);

	mutex_init(&file->file_mutex);

	filp->private_data = file;
	file->filp = filp;
	file->device = container_of(inode->i_cdev, struct ib_ucm_device, cdev);

	return nonseekable_open(inode, filp);
}

static int ib_ucm_close(struct inode *inode, struct file *filp)
{
	struct ib_ucm_file *file = filp->private_data;
	struct ib_ucm_context *ctx;

	mutex_lock(&file->file_mutex);
	while (!list_empty(&file->ctxs)) {
		ctx = list_entry(file->ctxs.next,
				 struct ib_ucm_context, file_list);
		mutex_unlock(&file->file_mutex);

		mutex_lock(&ctx_id_mutex);
		idr_remove(&ctx_id_table, ctx->id);
		mutex_unlock(&ctx_id_mutex);

		ib_destroy_cm_id(ctx->cm_id);
		ib_ucm_cleanup_events(ctx);
		kfree(ctx);

		mutex_lock(&file->file_mutex);
	}
	mutex_unlock(&file->file_mutex);
	kfree(file);
	return 0;
}

static void ib_ucm_release_dev(struct device *dev)
{
	struct ib_ucm_device *ucm_dev;

	ucm_dev = container_of(dev, struct ib_ucm_device, dev);
	cdev_del(&ucm_dev->cdev);
	if (ucm_dev->devnum < IB_UCM_MAX_DEVICES)
		clear_bit(ucm_dev->devnum, dev_map);
	else
		clear_bit(ucm_dev->devnum - IB_UCM_MAX_DEVICES, dev_map);
	kfree(ucm_dev);
}

static const struct file_operations ucm_fops = {
	.owner	 = THIS_MODULE,
	.open	 = ib_ucm_open,
	.release = ib_ucm_close,
	.write	 = ib_ucm_write,
	.poll    = ib_ucm_poll,
	.llseek	 = no_llseek,
};

static ssize_t show_ibdev(struct device *dev, struct device_attribute *attr,
			  char *buf)
{
	struct ib_ucm_device *ucm_dev;

	ucm_dev = container_of(dev, struct ib_ucm_device, dev);
	return sprintf(buf, "%s\n", ucm_dev->ib_dev->name);
}
static DEVICE_ATTR(ibdev, S_IRUGO, show_ibdev, NULL);

static dev_t overflow_maj;
static DECLARE_BITMAP(overflow_map, IB_UCM_MAX_DEVICES);
static int find_overflow_devnum(void)
{
	int ret;

	if (!overflow_maj) {
		ret = alloc_chrdev_region(&overflow_maj, 0, IB_UCM_MAX_DEVICES,
					  "infiniband_cm");
		if (ret) {
			printk(KERN_ERR "ucm: couldn't register dynamic device number\n");
			return ret;
		}
	}

	ret = find_first_zero_bit(overflow_map, IB_UCM_MAX_DEVICES);
	if (ret >= IB_UCM_MAX_DEVICES)
		return -1;

	return ret;
}
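
/*
 * Client callback: create a "ucm<N>" char device and sysfs entry for each
 * RDMA device that uses IB transport and supports userspace contexts.
 */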
static void ib_ucm_add_one(struct ib_device *device)
{
	int devnum;
	dev_t base;
	struct ib_ucm_device *ucm_dev;

	if (!device->alloc_ucontext ||
	    rdma_node_get_transport(device->node_type) != RDMA_TRANSPORT_IB)
		return;

	ucm_dev = kzalloc(sizeof *ucm_dev, GFP_KERNEL);
	if (!ucm_dev)
		return;

	ucm_dev->ib_dev = device;

	devnum = find_first_zero_bit(dev_map, IB_UCM_MAX_DEVICES);
	if (devnum >= IB_UCM_MAX_DEVICES) {
		devnum = find_overflow_devnum();
		if (devnum < 0)
			goto err;

		ucm_dev->devnum = devnum + IB_UCM_MAX_DEVICES;
		base = devnum + overflow_maj;
		set_bit(devnum, overflow_map);
	} else {
		ucm_dev->devnum = devnum;
		base = devnum + IB_UCM_BASE_DEV;
		set_bit(devnum, dev_map);
	}

	cdev_init(&ucm_dev->cdev, &ucm_fops);
	ucm_dev->cdev.owner = THIS_MODULE;
	kobject_set_name(&ucm_dev->cdev.kobj, "ucm%d", ucm_dev->devnum);
	if (cdev_add(&ucm_dev->cdev, base, 1))
		goto err;

	ucm_dev->dev.class = &cm_class;
	ucm_dev->dev.parent = device->dma_device;
	ucm_dev->dev.devt = ucm_dev->cdev.dev;
	ucm_dev->dev.release = ib_ucm_release_dev;
	dev_set_name(&ucm_dev->dev, "ucm%d", ucm_dev->devnum);
	if (device_register(&ucm_dev->dev))
		goto err_cdev;

	if (device_create_file(&ucm_dev->dev, &dev_attr_ibdev))
		goto err_dev;

	ib_set_client_data(device, &ucm_client, ucm_dev);
	return;

err_dev:
	device_unregister(&ucm_dev->dev);
err_cdev:
	cdev_del(&ucm_dev->cdev);
	if (ucm_dev->devnum < IB_UCM_MAX_DEVICES)
		clear_bit(devnum, dev_map);
	else
		clear_bit(devnum, overflow_map);
err:
	kfree(ucm_dev);
	return;
}

static void ib_ucm_remove_one(struct ib_device *device)
{
	struct ib_ucm_device *ucm_dev = ib_get_client_data(device, &ucm_client);

	if (!ucm_dev)
		return;

	device_unregister(&ucm_dev->dev);
}

static CLASS_ATTR_STRING(abi_version, S_IRUGO,
			 __stringify(IB_USER_CM_ABI_VERSION));

static int __init ib_ucm_init(void)
{
	int ret;

	ret = register_chrdev_region(IB_UCM_BASE_DEV, IB_UCM_MAX_DEVICES,
				     "infiniband_cm");
	if (ret) {
		printk(KERN_ERR "ucm: couldn't register device number\n");
		goto error1;
	}

	ret = class_create_file(&cm_class, &class_attr_abi_version.attr);
	if (ret) {
		printk(KERN_ERR "ucm: couldn't create abi_version attribute\n");
		goto error2;
	}

	ret = ib_register_client(&ucm_client);
	if (ret) {
		printk(KERN_ERR "ucm: couldn't register client\n");
		goto error3;
	}
	return 0;

error3:
	class_remove_file(&cm_class, &class_attr_abi_version.attr);
error2:
	unregister_chrdev_region(IB_UCM_BASE_DEV, IB_UCM_MAX_DEVICES);
error1:
	return ret;
}

static void __exit ib_ucm_cleanup(void)
{
	ib_unregister_client(&ucm_client);
	class_remove_file(&cm_class, &class_attr_abi_version.attr);
	unregister_chrdev_region(IB_UCM_BASE_DEV, IB_UCM_MAX_DEVICES);
	if (overflow_maj)
		unregister_chrdev_region(overflow_maj, IB_UCM_MAX_DEVICES);
	idr_destroy(&ctx_id_table);
}

module_init(ib_ucm_init);
module_exit(ib_ucm_cleanup);