/*
 * Copyright(c) 2015-2017 Intel Corporation.
 *
 * This file is provided under a dual BSD/GPLv2 license.  When using or
 * redistributing this file, you may do so under either license.
 *
 * GPL LICENSE SUMMARY
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * BSD LICENSE
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 *  - Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  - Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *  - Neither the name of Intel Corporation nor the names of its
 *    contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 */

|
|
|
|
#include <linux/poll.h>
|
|
|
|
#include <linux/cdev.h>
|
|
|
|
#include <linux/vmalloc.h>
|
|
|
|
#include <linux/io.h>
|
2017-02-09 00:51:29 +07:00
|
|
|
#include <linux/sched/mm.h>
|
2017-05-04 19:15:15 +07:00
|
|
|
#include <linux/bitmap.h>
|
2015-07-31 02:17:43 +07:00
|
|
|
|
2016-04-11 08:13:13 +07:00
|
|
|
#include <rdma/ib.h>
|
|
|
|
|
2015-07-31 02:17:43 +07:00
|
|
|
#include "hfi.h"
|
|
|
|
#include "pio.h"
|
|
|
|
#include "device.h"
|
|
|
|
#include "common.h"
|
|
|
|
#include "trace.h"
|
2017-08-22 08:27:23 +07:00
|
|
|
#include "mmu_rb.h"
|
2015-07-31 02:17:43 +07:00
|
|
|
#include "user_sdma.h"
|
2015-10-31 05:58:43 +07:00
|
|
|
#include "user_exp_rcv.h"
|
#include "aspm.h"

#undef pr_fmt
#define pr_fmt(fmt) DRIVER_NAME ": " fmt

#define SEND_CTXT_HALT_TIMEOUT 1000 /* msecs */

/*
 * File operation functions
 */
static int hfi1_file_open(struct inode *inode, struct file *fp);
static int hfi1_file_close(struct inode *inode, struct file *fp);
static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from);
static __poll_t hfi1_poll(struct file *fp, struct poll_table_struct *pt);
static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma);

static u64 kvirt_to_phys(void *addr);
static int assign_ctxt(struct hfi1_filedata *fd, unsigned long arg, u32 len);
static void init_subctxts(struct hfi1_ctxtdata *uctxt,
			  const struct hfi1_user_info *uinfo);
static int init_user_ctxt(struct hfi1_filedata *fd,
			  struct hfi1_ctxtdata *uctxt);
static void user_init(struct hfi1_ctxtdata *uctxt);
static int get_ctxt_info(struct hfi1_filedata *fd, unsigned long arg, u32 len);
static int get_base_info(struct hfi1_filedata *fd, unsigned long arg, u32 len);
static int user_exp_rcv_setup(struct hfi1_filedata *fd, unsigned long arg,
			      u32 len);
static int user_exp_rcv_clear(struct hfi1_filedata *fd, unsigned long arg,
			      u32 len);
static int user_exp_rcv_invalid(struct hfi1_filedata *fd, unsigned long arg,
				u32 len);
static int setup_base_ctxt(struct hfi1_filedata *fd,
			   struct hfi1_ctxtdata *uctxt);
static int setup_subctxt(struct hfi1_ctxtdata *uctxt);

static int find_sub_ctxt(struct hfi1_filedata *fd,
			 const struct hfi1_user_info *uinfo);
static int allocate_ctxt(struct hfi1_filedata *fd, struct hfi1_devdata *dd,
			 struct hfi1_user_info *uinfo,
			 struct hfi1_ctxtdata **cd);
static void deallocate_ctxt(struct hfi1_ctxtdata *uctxt);
static __poll_t poll_urgent(struct file *fp, struct poll_table_struct *pt);
static __poll_t poll_next(struct file *fp, struct poll_table_struct *pt);
static int user_event_ack(struct hfi1_ctxtdata *uctxt, u16 subctxt,
			  unsigned long arg);
static int set_ctxt_pkey(struct hfi1_ctxtdata *uctxt, unsigned long arg);
static int ctxt_reset(struct hfi1_ctxtdata *uctxt);
static int manage_rcvq(struct hfi1_ctxtdata *uctxt, u16 subctxt,
		       unsigned long arg);
static vm_fault_t vma_fault(struct vm_fault *vmf);
static long hfi1_file_ioctl(struct file *fp, unsigned int cmd,
			    unsigned long arg);

static const struct file_operations hfi1_file_ops = {
	.owner = THIS_MODULE,
	.write_iter = hfi1_write_iter,
	.open = hfi1_file_open,
	.release = hfi1_file_close,
	.unlocked_ioctl = hfi1_file_ioctl,
	.poll = hfi1_poll,
	.mmap = hfi1_file_mmap,
	.llseek = noop_llseek,
};

static const struct vm_operations_struct vm_ops = {
	.fault = vma_fault,
};

/*
 * Types of memories mapped into user processes' space
 */
enum mmap_types {
	PIO_BUFS = 1,
	PIO_BUFS_SOP,
	PIO_CRED,
	RCV_HDRQ,
	RCV_EGRBUF,
	UREGS,
	EVENTS,
	STATUS,
	RTAIL,
	SUBCTXT_UREGS,
	SUBCTXT_RCV_HDRQ,
	SUBCTXT_EGRBUF,
	SDMA_COMP
};

/*
 * Masks and offsets defining the mmap tokens
 */
#define HFI1_MMAP_OFFSET_MASK   0xfffULL
#define HFI1_MMAP_OFFSET_SHIFT  0
#define HFI1_MMAP_SUBCTXT_MASK  0xfULL
#define HFI1_MMAP_SUBCTXT_SHIFT 12
#define HFI1_MMAP_CTXT_MASK     0xffULL
#define HFI1_MMAP_CTXT_SHIFT    16
#define HFI1_MMAP_TYPE_MASK     0xfULL
#define HFI1_MMAP_TYPE_SHIFT    24
#define HFI1_MMAP_MAGIC_MASK    0xffffffffULL
#define HFI1_MMAP_MAGIC_SHIFT   32

#define HFI1_MMAP_MAGIC         0xdabbad00

#define HFI1_MMAP_TOKEN_SET(field, val)	\
	(((val) & HFI1_MMAP_##field##_MASK) << HFI1_MMAP_##field##_SHIFT)
#define HFI1_MMAP_TOKEN_GET(field, token) \
	(((token) >> HFI1_MMAP_##field##_SHIFT) & HFI1_MMAP_##field##_MASK)
#define HFI1_MMAP_TOKEN(type, ctxt, subctxt, addr)   \
	(HFI1_MMAP_TOKEN_SET(MAGIC, HFI1_MMAP_MAGIC) | \
	HFI1_MMAP_TOKEN_SET(TYPE, type) | \
	HFI1_MMAP_TOKEN_SET(CTXT, ctxt) | \
	HFI1_MMAP_TOKEN_SET(SUBCTXT, subctxt) | \
	HFI1_MMAP_TOKEN_SET(OFFSET, (offset_in_page(addr))))
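
/*
 * Layout summary, derived from the masks and shifts above: the 64-bit
 * mmap token carries the page offset in bits 0-11, the sub-context
 * index in bits 12-15, the context number in bits 16-23, the mmap type
 * in bits 24-27, and the 0xdabbad00 magic value in bits 32-63.  User
 * space hands the token back as the mmap() offset; is_valid_mmap()
 * below checks the magic before the other fields are decoded.
 */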

#define dbg(fmt, ...)				\
	pr_info(fmt, ##__VA_ARGS__)

static inline int is_valid_mmap(u64 token)
{
	return (HFI1_MMAP_TOKEN_GET(MAGIC, token) == HFI1_MMAP_MAGIC);
}

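/*
 * Open takes a device reference and allocates the per-fd state only; the
 * receive context itself is assigned later via the ASSIGN_CTXT ioctl.
 * The dd->user_refcount/dd->user_comp pairing below appears intended to
 * let device teardown wait for every open descriptor to be released
 * (descriptive note; see the matching decrement in hfi1_file_close()).
 */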
static int hfi1_file_open(struct inode *inode, struct file *fp)
{
	struct hfi1_filedata *fd;
	struct hfi1_devdata *dd = container_of(inode->i_cdev,
					       struct hfi1_devdata,
					       user_cdev);

	if (!((dd->flags & HFI1_PRESENT) && dd->kregbase1))
		return -EINVAL;

	if (!atomic_inc_not_zero(&dd->user_refcount))
		return -ENXIO;

	/* The real work is performed later in assign_ctxt() */
	fd = kzalloc(sizeof(*fd), GFP_KERNEL);

	if (fd) {
		fd->rec_cpu_num = -1; /* no cpu affinity by default */
		fd->mm = current->mm;
		mmgrab(fd->mm);
		fd->dd = dd;
		kobject_get(&fd->dd->kobj);
		fp->private_data = fd;
	} else {
		fp->private_data = NULL;

		if (atomic_dec_and_test(&dd->user_refcount))
			complete(&dd->user_comp);

		return -ENOMEM;
	}

	return 0;
}

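/*
 * ioctl dispatcher.  Every command other than ASSIGN_CTXT and GET_VERS
 * requires that a receive context has already been assigned to this fd;
 * handlers that copy a user structure receive _IOC_SIZE(cmd) so they can
 * verify the expected length.  (Descriptive summary of the dispatch below.)
 */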
static long hfi1_file_ioctl(struct file *fp, unsigned int cmd,
			    unsigned long arg)
{
	struct hfi1_filedata *fd = fp->private_data;
	struct hfi1_ctxtdata *uctxt = fd->uctxt;
	int ret = 0;
	int uval = 0;

	hfi1_cdbg(IOCTL, "IOCTL recv: 0x%x", cmd);
	if (cmd != HFI1_IOCTL_ASSIGN_CTXT &&
	    cmd != HFI1_IOCTL_GET_VERS &&
	    !uctxt)
		return -EINVAL;

	switch (cmd) {
	case HFI1_IOCTL_ASSIGN_CTXT:
		ret = assign_ctxt(fd, arg, _IOC_SIZE(cmd));
		break;

	case HFI1_IOCTL_CTXT_INFO:
		ret = get_ctxt_info(fd, arg, _IOC_SIZE(cmd));
		break;

	case HFI1_IOCTL_USER_INFO:
		ret = get_base_info(fd, arg, _IOC_SIZE(cmd));
		break;

	case HFI1_IOCTL_CREDIT_UPD:
		if (uctxt)
			sc_return_credits(uctxt->sc);
		break;

	case HFI1_IOCTL_TID_UPDATE:
		ret = user_exp_rcv_setup(fd, arg, _IOC_SIZE(cmd));
		break;

	case HFI1_IOCTL_TID_FREE:
		ret = user_exp_rcv_clear(fd, arg, _IOC_SIZE(cmd));
		break;

	case HFI1_IOCTL_TID_INVAL_READ:
		ret = user_exp_rcv_invalid(fd, arg, _IOC_SIZE(cmd));
		break;

	case HFI1_IOCTL_RECV_CTRL:
		ret = manage_rcvq(uctxt, fd->subctxt, arg);
		break;

	case HFI1_IOCTL_POLL_TYPE:
		if (get_user(uval, (int __user *)arg))
			return -EFAULT;
		uctxt->poll_type = (typeof(uctxt->poll_type))uval;
		break;

	case HFI1_IOCTL_ACK_EVENT:
		ret = user_event_ack(uctxt, fd->subctxt, arg);
		break;

	case HFI1_IOCTL_SET_PKEY:
		ret = set_ctxt_pkey(uctxt, arg);
		break;

	case HFI1_IOCTL_CTXT_RESET:
		ret = ctxt_reset(uctxt);
		break;

	case HFI1_IOCTL_GET_VERS:
		uval = HFI1_USER_SWVERSION;
		if (put_user(uval, (int __user *)arg))
			return -EFAULT;
		break;

	default:
		return -EINVAL;
	}

	return ret;
}

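/*
 * write() entry point for user SDMA.  Each call submits the iovec array as
 * SDMA requests: hfi1_user_sdma_process_request() consumes `count` segments
 * per request, and the return value is the number of requests queued or a
 * negative error.  (Descriptive note summarizing the loop below.)
 */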
static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
{
	struct hfi1_filedata *fd = kiocb->ki_filp->private_data;
	struct hfi1_user_sdma_pkt_q *pq = fd->pq;
	struct hfi1_user_sdma_comp_q *cq = fd->cq;
	int done = 0, reqs = 0;
	unsigned long dim = from->nr_segs;

	if (!cq || !pq)
		return -EIO;

	if (!iter_is_iovec(from) || !dim)
		return -EINVAL;

	trace_hfi1_sdma_request(fd->dd, fd->uctxt->ctxt, fd->subctxt, dim);

	if (atomic_read(&pq->n_reqs) == pq->n_max_reqs)
		return -ENOSPC;

	while (dim) {
		int ret;
		unsigned long count = 0;

		ret = hfi1_user_sdma_process_request(
			fd, (struct iovec *)(from->iov + done),
			dim, &count);
		if (ret) {
			reqs = ret;
			break;
		}
		dim -= count;
		done += count;
		reqs++;
	}

	return reqs;
}

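/*
 * mmap handler.  The mmap offset is a token built with HFI1_MMAP_TOKEN():
 * after the magic and the context/sub-context are validated, the switch
 * below picks the physical or kernel-virtual region for the requested
 * type and maps it with io_remap_pfn_range()/remap_pfn_range(), or defers
 * vmalloc-backed areas to vma_fault().  (Descriptive note for the handler
 * below.)
 */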
static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
{
	struct hfi1_filedata *fd = fp->private_data;
	struct hfi1_ctxtdata *uctxt = fd->uctxt;
	struct hfi1_devdata *dd;
	unsigned long flags;
	u64 token = vma->vm_pgoff << PAGE_SHIFT,
		memaddr = 0;
	void *memvirt = NULL;
	u8 subctxt, mapio = 0, vmf = 0, type;
	ssize_t memlen = 0;
	int ret = 0;
	u16 ctxt;

	if (!is_valid_mmap(token) || !uctxt ||
	    !(vma->vm_flags & VM_SHARED)) {
		ret = -EINVAL;
		goto done;
	}
	dd = uctxt->dd;
	ctxt = HFI1_MMAP_TOKEN_GET(CTXT, token);
	subctxt = HFI1_MMAP_TOKEN_GET(SUBCTXT, token);
	type = HFI1_MMAP_TOKEN_GET(TYPE, token);
	if (ctxt != uctxt->ctxt || subctxt != fd->subctxt) {
		ret = -EINVAL;
		goto done;
	}

	flags = vma->vm_flags;

	switch (type) {
	case PIO_BUFS:
	case PIO_BUFS_SOP:
		memaddr = ((dd->physaddr + TXE_PIO_SEND) +
				/* chip pio base */
			   (uctxt->sc->hw_context * BIT(16))) +
				/* 64K PIO space / ctxt */
			(type == PIO_BUFS_SOP ?
				(TXE_PIO_SIZE / 2) : 0); /* sop? */
		/*
		 * Map only the amount allocated to the context, not the
		 * entire available context's PIO space.
		 */
		memlen = PAGE_ALIGN(uctxt->sc->credits * PIO_BLOCK_SIZE);
		flags &= ~VM_MAYREAD;
		flags |= VM_DONTCOPY | VM_DONTEXPAND;
		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
		mapio = 1;
		break;
	case PIO_CRED:
		if (flags & VM_WRITE) {
			ret = -EPERM;
			goto done;
		}
		/*
		 * The credit return location for this context could be on the
		 * second or third page allocated for credit returns (if number
		 * of enabled contexts > 64 and 128 respectively).
		 */
		memvirt = dd->cr_base[uctxt->numa_id].va;
		memaddr = virt_to_phys(memvirt) +
			(((u64)uctxt->sc->hw_free -
			  (u64)dd->cr_base[uctxt->numa_id].va) & PAGE_MASK);
		memlen = PAGE_SIZE;
		flags &= ~VM_MAYWRITE;
		flags |= VM_DONTCOPY | VM_DONTEXPAND;
		/*
		 * The driver has already allocated memory for credit
		 * returns and programmed it into the chip. Has that
		 * memory been flagged as non-cached?
		 */
		/* vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); */
		mapio = 1;
		break;
	case RCV_HDRQ:
		memlen = uctxt->rcvhdrq_size;
		memvirt = uctxt->rcvhdrq;
		break;
	case RCV_EGRBUF: {
		unsigned long addr;
		int i;
		/*
		 * The RcvEgr buffer needs to be handled differently
		 * as multiple non-contiguous pages need to be mapped
		 * into the user process.
		 */
		memlen = uctxt->egrbufs.size;
		if ((vma->vm_end - vma->vm_start) != memlen) {
			dd_dev_err(dd, "Eager buffer map size invalid (%lu != %lu)\n",
				   (vma->vm_end - vma->vm_start), memlen);
			ret = -EINVAL;
			goto done;
		}
		if (vma->vm_flags & VM_WRITE) {
			ret = -EPERM;
			goto done;
		}
		vma->vm_flags &= ~VM_MAYWRITE;
		addr = vma->vm_start;
		for (i = 0 ; i < uctxt->egrbufs.numbufs; i++) {
			memlen = uctxt->egrbufs.buffers[i].len;
			memvirt = uctxt->egrbufs.buffers[i].addr;
			ret = remap_pfn_range(
				vma, addr,
				/*
				 * virt_to_pfn() does the same, but
				 * it's not available on x86_64
				 * when CONFIG_MMU is enabled.
				 */
				PFN_DOWN(__pa(memvirt)),
				memlen,
				vma->vm_page_prot);
			if (ret < 0)
				goto done;
			addr += memlen;
		}
		ret = 0;
		goto done;
	}
	case UREGS:
		/*
		 * Map only the page that contains this context's user
		 * registers.
		 */
		memaddr = (unsigned long)
			(dd->physaddr + RXE_PER_CONTEXT_USER)
			+ (uctxt->ctxt * RXE_PER_CONTEXT_SIZE);
		/*
		 * TidFlow table is on the same page as the rest of the
		 * user registers.
		 */
		memlen = PAGE_SIZE;
		flags |= VM_DONTCOPY | VM_DONTEXPAND;
		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
		mapio = 1;
		break;
	case EVENTS:
		/*
		 * Use the page where this context's flags are. User level
		 * knows where its own bitmap is within the page.
		 */
		memaddr = (unsigned long)
			(dd->events + uctxt_offset(uctxt)) & PAGE_MASK;
		memlen = PAGE_SIZE;
		/*
		 * v3.7 removes VM_RESERVED but the effect is kept by
		 * using VM_IO.
		 */
		flags |= VM_IO | VM_DONTEXPAND;
		vmf = 1;
		break;
	case STATUS:
		if (flags & (unsigned long)(VM_WRITE | VM_EXEC)) {
			ret = -EPERM;
			goto done;
		}
		memaddr = kvirt_to_phys((void *)dd->status);
		memlen = PAGE_SIZE;
		flags |= VM_IO | VM_DONTEXPAND;
		break;
	case RTAIL:
		if (!HFI1_CAP_IS_USET(DMA_RTAIL)) {
			/*
			 * If the memory allocation failed, the context alloc
			 * also would have failed, so we would never get here
			 */
			ret = -EINVAL;
			goto done;
		}
		if ((flags & VM_WRITE) || !uctxt->rcvhdrtail_kvaddr) {
			ret = -EPERM;
			goto done;
		}
		memlen = PAGE_SIZE;
		memvirt = (void *)uctxt->rcvhdrtail_kvaddr;
		flags &= ~VM_MAYWRITE;
		break;
	case SUBCTXT_UREGS:
		memaddr = (u64)uctxt->subctxt_uregbase;
		memlen = PAGE_SIZE;
		flags |= VM_IO | VM_DONTEXPAND;
		vmf = 1;
		break;
	case SUBCTXT_RCV_HDRQ:
		memaddr = (u64)uctxt->subctxt_rcvhdr_base;
		memlen = uctxt->rcvhdrq_size * uctxt->subctxt_cnt;
		flags |= VM_IO | VM_DONTEXPAND;
		vmf = 1;
		break;
	case SUBCTXT_EGRBUF:
		memaddr = (u64)uctxt->subctxt_rcvegrbuf;
		memlen = uctxt->egrbufs.size * uctxt->subctxt_cnt;
		flags |= VM_IO | VM_DONTEXPAND;
		flags &= ~VM_MAYWRITE;
		vmf = 1;
		break;
	case SDMA_COMP: {
		struct hfi1_user_sdma_comp_q *cq = fd->cq;

		if (!cq) {
			ret = -EFAULT;
			goto done;
		}
		memaddr = (u64)cq->comps;
		memlen = PAGE_ALIGN(sizeof(*cq->comps) * cq->nentries);
		flags |= VM_IO | VM_DONTEXPAND;
		vmf = 1;
		break;
	}
	default:
		ret = -EINVAL;
		break;
	}

	if ((vma->vm_end - vma->vm_start) != memlen) {
		hfi1_cdbg(PROC, "%u:%u Memory size mismatch %lu:%lu",
			  uctxt->ctxt, fd->subctxt,
			  (vma->vm_end - vma->vm_start), memlen);
		ret = -EINVAL;
		goto done;
	}

	vma->vm_flags = flags;
	hfi1_cdbg(PROC,
		  "%u:%u type:%u io/vf:%d/%d, addr:0x%llx, len:%lu(%lu), flags:0x%lx\n",
		  ctxt, subctxt, type, mapio, vmf, memaddr, memlen,
		  vma->vm_end - vma->vm_start, vma->vm_flags);
	if (vmf) {
		vma->vm_pgoff = PFN_DOWN(memaddr);
		vma->vm_ops = &vm_ops;
		ret = 0;
	} else if (mapio) {
		ret = io_remap_pfn_range(vma, vma->vm_start,
					 PFN_DOWN(memaddr),
					 memlen,
					 vma->vm_page_prot);
	} else if (memvirt) {
		ret = remap_pfn_range(vma, vma->vm_start,
				      PFN_DOWN(__pa(memvirt)),
				      memlen,
				      vma->vm_page_prot);
	} else {
		ret = remap_pfn_range(vma, vma->vm_start,
				      PFN_DOWN(memaddr),
				      memlen,
				      vma->vm_page_prot);
	}
done:
	return ret;
}

/*
 * Local (non-chip) user memory is not mapped right away but as it is
 * accessed by the user-level code.
 */
static vm_fault_t vma_fault(struct vm_fault *vmf)
{
	struct page *page;

	page = vmalloc_to_page((void *)(vmf->pgoff << PAGE_SHIFT));
	if (!page)
		return VM_FAULT_SIGBUS;

	get_page(page);
	vmf->page = page;

	return 0;
}

static __poll_t hfi1_poll(struct file *fp, struct poll_table_struct *pt)
{
	struct hfi1_ctxtdata *uctxt;
	__poll_t pollflag;

	uctxt = ((struct hfi1_filedata *)fp->private_data)->uctxt;
	if (!uctxt)
		pollflag = EPOLLERR;
	else if (uctxt->poll_type == HFI1_POLL_TYPE_URGENT)
		pollflag = poll_urgent(fp, pt);
	else if (uctxt->poll_type == HFI1_POLL_TYPE_ANYRCV)
		pollflag = poll_next(fp, pt);
	else /* invalid */
		pollflag = EPOLLERR;

	return pollflag;
}

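/*
 * Release path: drain the user SDMA queues, drop the CPU affinity
 * reservation, free expected-receive state, clear leftover events, and
 * release this sub-context's bit.  Only when the last sub-context closes
 * is the receive context disabled and deallocated.  (Descriptive note
 * summarizing hfi1_file_close() below.)
 */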
static int hfi1_file_close(struct inode *inode, struct file *fp)
{
	struct hfi1_filedata *fdata = fp->private_data;
	struct hfi1_ctxtdata *uctxt = fdata->uctxt;
	struct hfi1_devdata *dd = container_of(inode->i_cdev,
					       struct hfi1_devdata,
					       user_cdev);
	unsigned long flags, *ev;

	fp->private_data = NULL;

	if (!uctxt)
		goto done;

	hfi1_cdbg(PROC, "closing ctxt %u:%u", uctxt->ctxt, fdata->subctxt);

	flush_wc();
	/* drain user sdma queue */
	hfi1_user_sdma_free_queues(fdata, uctxt);

	/* release the cpu */
	hfi1_put_proc_affinity(fdata->rec_cpu_num);

	/* clean up rcv side */
	hfi1_user_exp_rcv_free(fdata);

	/*
	 * fdata->uctxt is used in the above cleanup.  It is not ready to be
	 * removed until here.
	 */
	fdata->uctxt = NULL;
	hfi1_rcd_put(uctxt);

	/*
	 * Clear any left over, unhandled events so the next process that
	 * gets this context doesn't get confused.
	 */
	ev = dd->events + uctxt_offset(uctxt) + fdata->subctxt;
	*ev = 0;

	spin_lock_irqsave(&dd->uctxt_lock, flags);
	__clear_bit(fdata->subctxt, uctxt->in_use_ctxts);
	if (!bitmap_empty(uctxt->in_use_ctxts, HFI1_MAX_SHARED_CTXTS)) {
		spin_unlock_irqrestore(&dd->uctxt_lock, flags);
		goto done;
	}
	spin_unlock_irqrestore(&dd->uctxt_lock, flags);

	/*
	 * Disable receive context and interrupt available, reset all
	 * RcvCtxtCtrl bits to default values.
	 */
	hfi1_rcvctrl(dd, HFI1_RCVCTRL_CTXT_DIS |
		     HFI1_RCVCTRL_TIDFLOW_DIS |
		     HFI1_RCVCTRL_INTRAVAIL_DIS |
		     HFI1_RCVCTRL_TAILUPD_DIS |
		     HFI1_RCVCTRL_ONE_PKT_EGR_DIS |
		     HFI1_RCVCTRL_NO_RHQ_DROP_DIS |
		     HFI1_RCVCTRL_NO_EGR_DROP_DIS, uctxt);
	/* Clear the context's J_KEY */
	hfi1_clear_ctxt_jkey(dd, uctxt);
	/*
	 * If a send context is allocated, reset context integrity
	 * checks to default and disable the send context.
	 */
	if (uctxt->sc) {
		sc_disable(uctxt->sc);
		set_pio_integrity(uctxt->sc);
	}

	hfi1_free_ctxt_rcv_groups(uctxt);
	hfi1_clear_ctxt_pkey(dd, uctxt);

	uctxt->event_flags = 0;

	deallocate_ctxt(uctxt);
done:
	mmdrop(fdata->mm);
	kobject_put(&dd->kobj);

	if (atomic_dec_and_test(&dd->user_refcount))
		complete(&dd->user_comp);

	kfree(fdata);
	return 0;
}

/*
 * Convert kernel *virtual* addresses to physical addresses.
 * This is used for vmalloc'ed addresses.
 */
static u64 kvirt_to_phys(void *addr)
{
	struct page *page;
	u64 paddr = 0;

	page = vmalloc_to_page(addr);
	if (page)
		paddr = page_to_pfn(page) << PAGE_SHIFT;

	return paddr;
}

/**
 * complete_subctxt
 * @fd: valid filedata pointer
 *
 * Sub-context info can only be set up after the base context
 * has been completed.  This is indicated by the clearing of the
 * HFI1_CTXT_BASE_UNINIT bit.
 *
 * Wait for the bit to be cleared, and then complete the subcontext
 * initialization.
 *
 */
static int complete_subctxt(struct hfi1_filedata *fd)
{
	int ret;
	unsigned long flags;

	/*
	 * sub-context info can only be set up after the base context
	 * has been completed.
	 */
	ret = wait_event_interruptible(
		fd->uctxt->wait,
		!test_bit(HFI1_CTXT_BASE_UNINIT, &fd->uctxt->event_flags));

	if (test_bit(HFI1_CTXT_BASE_FAILED, &fd->uctxt->event_flags))
		ret = -ENOMEM;

	/* Finish the sub-context init */
	if (!ret) {
		fd->rec_cpu_num = hfi1_get_proc_affinity(fd->uctxt->numa_id);
		ret = init_user_ctxt(fd, fd->uctxt);
	}

	if (ret) {
		spin_lock_irqsave(&fd->dd->uctxt_lock, flags);
		__clear_bit(fd->subctxt, fd->uctxt->in_use_ctxts);
		spin_unlock_irqrestore(&fd->dd->uctxt_lock, flags);
		hfi1_rcd_put(fd->uctxt);
		fd->uctxt = NULL;
	}

	return ret;
}

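/*
 * ASSIGN_CTXT handler.  Under hfi1_mutex, first try to join an existing
 * shared base context (find_sub_ctxt() returns 1) and otherwise allocate
 * a new base context (find_sub_ctxt() returns 0); the matching init is
 * then finished outside the mutex via setup_base_ctxt() or
 * complete_subctxt().  (Descriptive note for assign_ctxt() below.)
 */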
static int assign_ctxt(struct hfi1_filedata *fd, unsigned long arg, u32 len)
{
	int ret;
	unsigned int swmajor;
	struct hfi1_ctxtdata *uctxt = NULL;
	struct hfi1_user_info uinfo;

	if (fd->uctxt)
		return -EINVAL;

	if (sizeof(uinfo) != len)
		return -EINVAL;

	if (copy_from_user(&uinfo, (void __user *)arg, sizeof(uinfo)))
		return -EFAULT;

	swmajor = uinfo.userversion >> 16;
	if (swmajor != HFI1_USER_SWMAJOR)
		return -ENODEV;

	if (uinfo.subctxt_cnt > HFI1_MAX_SHARED_CTXTS)
		return -EINVAL;

	/*
	 * Acquire the mutex to protect against multiple creations of what
	 * could be a shared base context.
	 */
	mutex_lock(&hfi1_mutex);
	/*
	 * Get a sub context if available (fd->uctxt will be set).
	 * ret < 0 error, 0 no context, 1 sub-context found
	 */
	ret = find_sub_ctxt(fd, &uinfo);

	/*
	 * Allocate a base context if context sharing is not required or a
	 * sub context wasn't found.
	 */
	if (!ret)
		ret = allocate_ctxt(fd, fd->dd, &uinfo, &uctxt);

	mutex_unlock(&hfi1_mutex);

	/* Depending on the context type, finish the appropriate init */
	switch (ret) {
	case 0:
		ret = setup_base_ctxt(fd, uctxt);
		if (ret)
			deallocate_ctxt(uctxt);
		break;
	case 1:
		ret = complete_subctxt(fd);
		break;
	default:
		break;
	}

	return ret;
}

/**
 * match_ctxt
 * @fd: valid filedata pointer
 * @uinfo: user info to compare base context with
 * @uctxt: context to compare uinfo to.
 *
 * Compare the given context with the given information to see if it
 * can be used for a sub context.
 */
static int match_ctxt(struct hfi1_filedata *fd,
		      const struct hfi1_user_info *uinfo,
		      struct hfi1_ctxtdata *uctxt)
{
	struct hfi1_devdata *dd = fd->dd;
	unsigned long flags;
	u16 subctxt;

	/* Skip dynamically allocated kernel contexts */
	if (uctxt->sc && (uctxt->sc->type == SC_KERNEL))
		return 0;

	/* Skip ctxt if it doesn't match the requested one */
	if (memcmp(uctxt->uuid, uinfo->uuid, sizeof(uctxt->uuid)) ||
	    uctxt->jkey != generate_jkey(current_uid()) ||
	    uctxt->subctxt_id != uinfo->subctxt_id ||
	    uctxt->subctxt_cnt != uinfo->subctxt_cnt)
		return 0;

	/* Verify the sharing process matches the base */
	if (uctxt->userversion != uinfo->userversion)
		return -EINVAL;

	/* Find an unused sub context */
	spin_lock_irqsave(&dd->uctxt_lock, flags);
	if (bitmap_empty(uctxt->in_use_ctxts, HFI1_MAX_SHARED_CTXTS)) {
		/* context is being closed, do not use */
		spin_unlock_irqrestore(&dd->uctxt_lock, flags);
		return 0;
	}

	subctxt = find_first_zero_bit(uctxt->in_use_ctxts,
				      HFI1_MAX_SHARED_CTXTS);
	if (subctxt >= uctxt->subctxt_cnt) {
		spin_unlock_irqrestore(&dd->uctxt_lock, flags);
		return -EBUSY;
	}

	fd->subctxt = subctxt;
	__set_bit(fd->subctxt, uctxt->in_use_ctxts);
	spin_unlock_irqrestore(&dd->uctxt_lock, flags);

	fd->uctxt = uctxt;
	hfi1_rcd_get(uctxt);

	return 1;
}

/**
 * find_sub_ctxt
 * @fd: valid filedata pointer
 * @uinfo: matching info to use to find a possible context to share.
 *
 * The hfi1_mutex must be held when this function is called.  It is
 * necessary to ensure serialized creation of shared contexts.
 *
 * Return:
 *    0      No sub-context found
 *    1      Subcontext found and allocated
 *    errno  EINVAL (incorrect parameters)
 *           EBUSY (all sub contexts in use)
 */
static int find_sub_ctxt(struct hfi1_filedata *fd,
			 const struct hfi1_user_info *uinfo)
{
	struct hfi1_ctxtdata *uctxt;
	struct hfi1_devdata *dd = fd->dd;
	u16 i;
	int ret;

	if (!uinfo->subctxt_cnt)
		return 0;

	for (i = dd->first_dyn_alloc_ctxt; i < dd->num_rcv_contexts; i++) {
		uctxt = hfi1_rcd_get_by_index(dd, i);
		if (uctxt) {
			ret = match_ctxt(fd, uinfo, uctxt);
			hfi1_rcd_put(uctxt);
			/* value of != 0 will return */
			if (ret)
				return ret;
		}
	}

	return 0;
}

static int allocate_ctxt(struct hfi1_filedata *fd, struct hfi1_devdata *dd,
			 struct hfi1_user_info *uinfo,
			 struct hfi1_ctxtdata **rcd)
{
	struct hfi1_ctxtdata *uctxt;
	int ret, numa;

	if (dd->flags & HFI1_FROZEN) {
		/*
		 * Pick an error that is unique from all other errors
		 * that are returned so the user process knows that
		 * it tried to allocate while the SPC was frozen.  It
		 * should be able to retry with success in a short
		 * while.
		 */
		return -EIO;
	}

	if (!dd->freectxts)
		return -EBUSY;

	/*
	 * If we don't have a NUMA node requested, preference is towards
	 * device NUMA node.
	 */
	fd->rec_cpu_num = hfi1_get_proc_affinity(dd->node);
	if (fd->rec_cpu_num != -1)
		numa = cpu_to_node(fd->rec_cpu_num);
	else
		numa = numa_node_id();
	ret = hfi1_create_ctxtdata(dd->pport, numa, &uctxt);
	if (ret < 0) {
		dd_dev_err(dd, "user ctxtdata allocation failed\n");
		return ret;
	}
	hfi1_cdbg(PROC, "[%u:%u] pid %u assigned to CPU %d (NUMA %u)",
		  uctxt->ctxt, fd->subctxt, current->pid, fd->rec_cpu_num,
		  uctxt->numa_id);

	/*
	 * Allocate and enable a PIO send context.
	 */
	uctxt->sc = sc_alloc(dd, SC_USER, uctxt->rcvhdrqentsize, dd->node);
	if (!uctxt->sc) {
		ret = -ENOMEM;
		goto ctxdata_free;
	}
	hfi1_cdbg(PROC, "allocated send context %u(%u)\n", uctxt->sc->sw_index,
		  uctxt->sc->hw_context);
	ret = sc_enable(uctxt->sc);
	if (ret)
		goto ctxdata_free;

	/*
	 * Setup sub context information if the user-level has requested
	 * sub contexts.
	 * This has to be done here so the rest of the sub-contexts find the
	 * proper base context.
	 */
	if (uinfo->subctxt_cnt)
		init_subctxts(uctxt, uinfo);
	uctxt->userversion = uinfo->userversion;
	uctxt->flags = hfi1_cap_mask; /* save current flag state */
	init_waitqueue_head(&uctxt->wait);
	strlcpy(uctxt->comm, current->comm, sizeof(uctxt->comm));
	memcpy(uctxt->uuid, uinfo->uuid, sizeof(uctxt->uuid));
	uctxt->jkey = generate_jkey(current_uid());
	hfi1_stats.sps_ctxts++;
	/*
	 * Disable ASPM when there are open user/PSM contexts to avoid
	 * issues with ASPM L1 exit latency
	 */
	if (dd->freectxts-- == dd->num_user_contexts)
		aspm_disable_all(dd);

	*rcd = uctxt;

	return 0;

ctxdata_free:
	hfi1_free_ctxt(uctxt);
	return ret;
}

static void deallocate_ctxt(struct hfi1_ctxtdata *uctxt)
{
	mutex_lock(&hfi1_mutex);
	hfi1_stats.sps_ctxts--;
	if (++uctxt->dd->freectxts == uctxt->dd->num_user_contexts)
		aspm_enable_all(uctxt->dd);
	mutex_unlock(&hfi1_mutex);

	hfi1_free_ctxt(uctxt);
}

static void init_subctxts(struct hfi1_ctxtdata *uctxt,
			  const struct hfi1_user_info *uinfo)
{
	uctxt->subctxt_cnt = uinfo->subctxt_cnt;
	uctxt->subctxt_id = uinfo->subctxt_id;
	set_bit(HFI1_CTXT_BASE_UNINIT, &uctxt->event_flags);
}

static int setup_subctxt(struct hfi1_ctxtdata *uctxt)
{
	int ret = 0;
	u16 num_subctxts = uctxt->subctxt_cnt;

	uctxt->subctxt_uregbase = vmalloc_user(PAGE_SIZE);
	if (!uctxt->subctxt_uregbase)
		return -ENOMEM;

	/* We can take the size of the RcvHdr Queue from the master */
	uctxt->subctxt_rcvhdr_base = vmalloc_user(uctxt->rcvhdrq_size *
						  num_subctxts);
	if (!uctxt->subctxt_rcvhdr_base) {
		ret = -ENOMEM;
		goto bail_ureg;
	}

	uctxt->subctxt_rcvegrbuf = vmalloc_user(uctxt->egrbufs.size *
						num_subctxts);
	if (!uctxt->subctxt_rcvegrbuf) {
		ret = -ENOMEM;
		goto bail_rhdr;
	}

	return 0;

bail_rhdr:
	vfree(uctxt->subctxt_rcvhdr_base);
	uctxt->subctxt_rcvhdr_base = NULL;
bail_ureg:
	vfree(uctxt->subctxt_uregbase);
	uctxt->subctxt_uregbase = NULL;

	return ret;
}

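/*
 * user_init() programs the receive-side controls for a newly assigned
 * context: it clears the in-memory tail copy, sets the J_KEY, and builds
 * the RcvCtxtCtrl enable mask from the context's capability flags before
 * handing it to hfi1_rcvctrl().  (Descriptive note for the function below.)
 */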
static void user_init(struct hfi1_ctxtdata *uctxt)
{
	unsigned int rcvctrl_ops = 0;

	/* initialize poll variables... */
	uctxt->urgent = 0;
	uctxt->urgent_poll = 0;

	/*
	 * Now enable the ctxt for receive.
	 * For chips that are set to DMA the tail register to memory
	 * when it changes (and when the update bit transitions from
	 * 0 to 1), we turn it off and then back on.
	 * This will (very briefly) affect any other open ctxts, but the
	 * duration is very short, and therefore isn't an issue.  We
	 * explicitly set the in-memory tail copy to 0 beforehand, so we
	 * don't have to wait to be sure the DMA update has happened
	 * (chip resets head/tail to 0 on transition to enable).
	 */
	if (uctxt->rcvhdrtail_kvaddr)
		clear_rcvhdrtail(uctxt);

	/* Setup J_KEY before enabling the context */
	hfi1_set_ctxt_jkey(uctxt->dd, uctxt, uctxt->jkey);

	rcvctrl_ops = HFI1_RCVCTRL_CTXT_ENB;
	if (HFI1_CAP_UGET_MASK(uctxt->flags, HDRSUPP))
		rcvctrl_ops |= HFI1_RCVCTRL_TIDFLOW_ENB;
	/*
	 * Ignore the bit in the flags for now until proper
	 * support for multiple packet per rcv array entry is
	 * added.
	 */
	if (!HFI1_CAP_UGET_MASK(uctxt->flags, MULTI_PKT_EGR))
		rcvctrl_ops |= HFI1_RCVCTRL_ONE_PKT_EGR_ENB;
	if (HFI1_CAP_UGET_MASK(uctxt->flags, NODROP_EGR_FULL))
		rcvctrl_ops |= HFI1_RCVCTRL_NO_EGR_DROP_ENB;
	if (HFI1_CAP_UGET_MASK(uctxt->flags, NODROP_RHQ_FULL))
		rcvctrl_ops |= HFI1_RCVCTRL_NO_RHQ_DROP_ENB;
	/*
	 * The RcvCtxtCtrl.TailUpd bit has to be explicitly written.
	 * We can't rely on the correct value to be set from prior
	 * uses of the chip or ctxt.  Therefore, add the rcvctrl op
	 * for both cases.
	 */
	if (HFI1_CAP_UGET_MASK(uctxt->flags, DMA_RTAIL))
		rcvctrl_ops |= HFI1_RCVCTRL_TAILUPD_ENB;
	else
		rcvctrl_ops |= HFI1_RCVCTRL_TAILUPD_DIS;
	hfi1_rcvctrl(uctxt->dd, rcvctrl_ops, uctxt);
}

static int get_ctxt_info(struct hfi1_filedata *fd, unsigned long arg, u32 len)
|
2015-07-31 02:17:43 +07:00
|
|
|
{
|
|
|
|
struct hfi1_ctxt_info cinfo;
|
2015-10-31 05:58:40 +07:00
|
|
|
struct hfi1_ctxtdata *uctxt = fd->uctxt;
|
2017-09-26 21:03:57 +07:00
|
|
|
|
|
|
|
if (sizeof(cinfo) != len)
|
|
|
|
return -EINVAL;
|
2015-07-31 02:17:43 +07:00
|
|
|
|
2015-09-16 13:42:25 +07:00
|
|
|
memset(&cinfo, 0, sizeof(cinfo));
|
2016-07-29 02:21:13 +07:00
|
|
|
cinfo.runtime_flags = (((uctxt->flags >> HFI1_CAP_MISC_SHIFT) &
|
|
|
|
HFI1_CAP_MISC_MASK) << HFI1_CAP_USER_SHIFT) |
|
|
|
|
HFI1_CAP_UGET_MASK(uctxt->flags, MASK) |
|
|
|
|
HFI1_CAP_KGET_MASK(uctxt->flags, K2U);
|
2016-07-29 02:21:21 +07:00
|
|
|
/* adjust flag if this fd is not able to cache */
|
|
|
|
if (!fd->handler)
|
|
|
|
cinfo.runtime_flags |= HFI1_CAP_TID_UNMAP; /* no caching */
|
|
|
|
|
2015-07-31 02:17:43 +07:00
|
|
|
cinfo.num_active = hfi1_count_active_units();
|
|
|
|
cinfo.unit = uctxt->dd->unit;
|
|
|
|
cinfo.ctxt = uctxt->ctxt;
|
2015-10-31 05:58:40 +07:00
|
|
|
cinfo.subctxt = fd->subctxt;
|
2015-07-31 02:17:43 +07:00
|
|
|
cinfo.rcvtids = roundup(uctxt->egrbufs.alloced,
|
|
|
|
uctxt->dd->rcv_entries.group_size) +
|
|
|
|
uctxt->expected_count;
|
|
|
|
cinfo.credits = uctxt->sc->credits;
|
|
|
|
cinfo.numa_node = uctxt->numa_id;
|
|
|
|
cinfo.rec_cpu = fd->rec_cpu_num;
|
|
|
|
cinfo.send_ctxt = uctxt->sc->hw_context;
|
|
|
|
|
|
|
|
cinfo.egrtids = uctxt->egrbufs.alloced;
|
|
|
|
cinfo.rcvhdrq_cnt = uctxt->rcvhdrq_cnt;
|
|
|
|
cinfo.rcvhdrq_entsize = uctxt->rcvhdrqentsize << 2;
|
2015-10-31 05:58:40 +07:00
|
|
|
cinfo.sdma_ring_size = fd->cq->nentries;
|
2015-07-31 02:17:43 +07:00
|
|
|
cinfo.rcvegr_size = uctxt->egrbufs.rcvtid_size;
|
|
|
|
|
2018-03-29 02:05:32 +07:00
|
|
|
trace_hfi1_ctxt_info(uctxt->dd, uctxt->ctxt, fd->subctxt, &cinfo);
|
2017-09-26 21:03:57 +07:00
|
|
|
if (copy_to_user((void __user *)arg, &cinfo, len))
|
|
|
|
return -EFAULT;
|
2016-07-29 02:21:13 +07:00
|
|
|
|
2017-09-26 21:03:57 +07:00
|
|
|
return 0;
|
2015-07-31 02:17:43 +07:00
|
|
|
}
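/**
 * init_user_ctxt - allocate the per-file SDMA and expected receive state
 * @fd: valid file data
 * @uctxt: the context being initialized
 *
 * Allocates the user SDMA request queues and initializes expected
 * receive (TID) state for this file. If the expected receive init
 * fails, the SDMA queues are freed again before returning the error.
 */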
static int init_user_ctxt(struct hfi1_filedata *fd,
			  struct hfi1_ctxtdata *uctxt)
{
	int ret;

	ret = hfi1_user_sdma_alloc_queues(uctxt, fd);
	if (ret)
		return ret;

	ret = hfi1_user_exp_rcv_init(fd, uctxt);
	if (ret)
		hfi1_user_sdma_free_queues(fd, uctxt);

	return ret;
}

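/**
 * setup_base_ctxt - complete initialization of a base (non-shared) context
 * @fd: valid file data
 * @uctxt: the freshly allocated context
 *
 * Allocates the receive header queue, eager buffers, sub-context memory
 * (if sharing is enabled) and receive array groups, then initializes the
 * per-file state and enables receive. On completion, successful or not,
 * any sub-context opens waiting on this base context are notified via the
 * context event flags.
 */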
static int setup_base_ctxt(struct hfi1_filedata *fd,
			   struct hfi1_ctxtdata *uctxt)
{
	struct hfi1_devdata *dd = uctxt->dd;
	int ret = 0;

	hfi1_init_ctxt(uctxt->sc);

	/* Now allocate the RcvHdr queue and eager buffers. */
	ret = hfi1_create_rcvhdrq(dd, uctxt);
	if (ret)
		goto done;

	ret = hfi1_setup_eagerbufs(uctxt);
	if (ret)
		goto done;

	/* If sub-contexts are enabled, do the appropriate setup */
	if (uctxt->subctxt_cnt)
		ret = setup_subctxt(uctxt);
	if (ret)
		goto done;

	ret = hfi1_alloc_ctxt_rcv_groups(uctxt);
	if (ret)
		goto done;

	ret = init_user_ctxt(fd, uctxt);
	if (ret)
		goto done;

	user_init(uctxt);

	/* Now that the context is set up, the fd can get a reference. */
	fd->uctxt = uctxt;
	hfi1_rcd_get(uctxt);

done:
	if (uctxt->subctxt_cnt) {
		/*
		 * On error, set the failed bit so sub-contexts will clean up
		 * correctly.
		 */
		if (ret)
			set_bit(HFI1_CTXT_BASE_FAILED, &uctxt->event_flags);

		/*
		 * Base context is done (successfully or not), notify anybody
		 * using a sub-context that is waiting for this completion.
		 */
		clear_bit(HFI1_CTXT_BASE_UNINIT, &uctxt->event_flags);
		wake_up(&uctxt->wait);
	}

	return ret;
}

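/**
 * get_base_info - copy the mmap tokens and base addresses to user space
 * @fd: valid file data
 * @arg: user space address of a struct hfi1_base_info to fill in
 * @len: length of the user space structure; must match sizeof(binfo)
 *
 * Builds the set of HFI1_MMAP_TOKEN values (PIO buffers, credit return,
 * receive header queue, eager buffers, SDMA completion ring, user
 * registers, events, status page and, where enabled, the DMA'ed receive
 * tail and sub-context buffers) that user space will later pass to mmap().
 */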
static int get_base_info(struct hfi1_filedata *fd, unsigned long arg, u32 len)
{
	struct hfi1_base_info binfo;
	struct hfi1_ctxtdata *uctxt = fd->uctxt;
	struct hfi1_devdata *dd = uctxt->dd;
	unsigned offset;

	trace_hfi1_uctxtdata(uctxt->dd, uctxt, fd->subctxt);

	if (sizeof(binfo) != len)
		return -EINVAL;

	memset(&binfo, 0, sizeof(binfo));
	binfo.hw_version = dd->revision;
	binfo.sw_version = HFI1_KERN_SWVERSION;
	binfo.bthqp = kdeth_qp;
	binfo.jkey = uctxt->jkey;
	/*
	 * If more than 64 contexts are enabled the allocated credit
	 * return will span two or three contiguous pages. Since we only
	 * map the page containing the context's credit return address,
	 * we need to calculate the offset in the proper page.
	 */
	offset = ((u64)uctxt->sc->hw_free -
		  (u64)dd->cr_base[uctxt->numa_id].va) % PAGE_SIZE;
	binfo.sc_credits_addr = HFI1_MMAP_TOKEN(PIO_CRED, uctxt->ctxt,
						fd->subctxt, offset);
	binfo.pio_bufbase = HFI1_MMAP_TOKEN(PIO_BUFS, uctxt->ctxt,
					    fd->subctxt,
					    uctxt->sc->base_addr);
	binfo.pio_bufbase_sop = HFI1_MMAP_TOKEN(PIO_BUFS_SOP,
						uctxt->ctxt,
						fd->subctxt,
						uctxt->sc->base_addr);
	binfo.rcvhdr_bufbase = HFI1_MMAP_TOKEN(RCV_HDRQ, uctxt->ctxt,
					       fd->subctxt,
					       uctxt->rcvhdrq);
	binfo.rcvegr_bufbase = HFI1_MMAP_TOKEN(RCV_EGRBUF, uctxt->ctxt,
					       fd->subctxt,
					       uctxt->egrbufs.rcvtids[0].dma);
	binfo.sdma_comp_bufbase = HFI1_MMAP_TOKEN(SDMA_COMP, uctxt->ctxt,
						  fd->subctxt, 0);
	/*
	 * user regs are at
	 * (RXE_PER_CONTEXT_USER + (ctxt * RXE_PER_CONTEXT_SIZE))
	 */
	binfo.user_regbase = HFI1_MMAP_TOKEN(UREGS, uctxt->ctxt,
					     fd->subctxt, 0);
	offset = offset_in_page((uctxt_offset(uctxt) + fd->subctxt) *
				sizeof(*dd->events));
	binfo.events_bufbase = HFI1_MMAP_TOKEN(EVENTS, uctxt->ctxt,
					       fd->subctxt,
					       offset);
	binfo.status_bufbase = HFI1_MMAP_TOKEN(STATUS, uctxt->ctxt,
					       fd->subctxt,
					       dd->status);
	if (HFI1_CAP_IS_USET(DMA_RTAIL))
		binfo.rcvhdrtail_base = HFI1_MMAP_TOKEN(RTAIL, uctxt->ctxt,
							fd->subctxt, 0);
	if (uctxt->subctxt_cnt) {
		binfo.subctxt_uregbase = HFI1_MMAP_TOKEN(SUBCTXT_UREGS,
							 uctxt->ctxt,
							 fd->subctxt, 0);
		binfo.subctxt_rcvhdrbuf = HFI1_MMAP_TOKEN(SUBCTXT_RCV_HDRQ,
							  uctxt->ctxt,
							  fd->subctxt, 0);
		binfo.subctxt_rcvegrbuf = HFI1_MMAP_TOKEN(SUBCTXT_EGRBUF,
							  uctxt->ctxt,
							  fd->subctxt, 0);
	}

	if (copy_to_user((void __user *)arg, &binfo, len))
		return -EFAULT;

	return 0;
}

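/*
 * The next three wrappers handle the expected receive (TID) ioctls. Each
 * validates the length of the user argument, copies in the hfi1_tid_info
 * structure, calls the corresponding hfi1_user_exp_rcv_* routine, and
 * copies the updated tidcnt (and, for setup, the registered length) back
 * to user space.
 */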
/**
 * user_exp_rcv_setup - Set up the given tid rcv list
 * @fd: file data of the current driver instance
 * @arg: ioctl argument for user space information
 * @len: length of data structure associated with ioctl command
 *
 * Wrapper to validate ioctl information before doing _rcv_setup.
 *
 */
static int user_exp_rcv_setup(struct hfi1_filedata *fd, unsigned long arg,
			      u32 len)
{
	int ret;
	unsigned long addr;
	struct hfi1_tid_info tinfo;

	if (sizeof(tinfo) != len)
		return -EINVAL;

	if (copy_from_user(&tinfo, (void __user *)arg, (sizeof(tinfo))))
		return -EFAULT;

	ret = hfi1_user_exp_rcv_setup(fd, &tinfo);
	if (!ret) {
		/*
		 * Copy the number of tidlist entries we used
		 * and the length of the buffer we registered.
		 */
		addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
		if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
				 sizeof(tinfo.tidcnt)))
			return -EFAULT;

		addr = arg + offsetof(struct hfi1_tid_info, length);
		if (copy_to_user((void __user *)addr, &tinfo.length,
				 sizeof(tinfo.length)))
			ret = -EFAULT;
	}

	return ret;
}

/**
 * user_exp_rcv_clear - Clear the given tid rcv list
 * @fd: file data of the current driver instance
 * @arg: ioctl argument for user space information
 * @len: length of data structure associated with ioctl command
 *
 * The hfi1_user_exp_rcv_clear() can be called from the error path. Because
 * of this, we need to use this wrapper to copy the user space information
 * before doing the clear.
 */
static int user_exp_rcv_clear(struct hfi1_filedata *fd, unsigned long arg,
			      u32 len)
{
	int ret;
	unsigned long addr;
	struct hfi1_tid_info tinfo;

	if (sizeof(tinfo) != len)
		return -EINVAL;

	if (copy_from_user(&tinfo, (void __user *)arg, (sizeof(tinfo))))
		return -EFAULT;

	ret = hfi1_user_exp_rcv_clear(fd, &tinfo);
	if (!ret) {
		addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
		if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
				 sizeof(tinfo.tidcnt)))
			return -EFAULT;
	}

	return ret;
}

/**
 * user_exp_rcv_invalid - Invalidate the given tid rcv list
 * @fd: file data of the current driver instance
 * @arg: ioctl argument for user space information
 * @len: length of data structure associated with ioctl command
 *
 * Wrapper to validate ioctl information before doing _rcv_invalid.
 *
 */
static int user_exp_rcv_invalid(struct hfi1_filedata *fd, unsigned long arg,
				u32 len)
{
	int ret;
	unsigned long addr;
	struct hfi1_tid_info tinfo;

	if (sizeof(tinfo) != len)
		return -EINVAL;

	if (!fd->invalid_tids)
		return -EINVAL;

	if (copy_from_user(&tinfo, (void __user *)arg, (sizeof(tinfo))))
		return -EFAULT;

	ret = hfi1_user_exp_rcv_invalid(fd, &tinfo);
	if (ret)
		return ret;

	addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
	if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
			 sizeof(tinfo.tidcnt)))
		ret = -EFAULT;

	return ret;
}

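/**
 * poll_urgent - poll method used while waiting for urgent packets
 * @fp: the open file
 * @pt: poll table to register the wait queue with
 *
 * Reports EPOLLIN | EPOLLRDNORM once the urgent packet count has moved
 * past the last value user space polled for; otherwise arms the
 * HFI1_CTXT_WAITING_URG event so the interrupt handler can wake us up.
 */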
static __poll_t poll_urgent(struct file *fp,
			    struct poll_table_struct *pt)
{
	struct hfi1_filedata *fd = fp->private_data;
	struct hfi1_ctxtdata *uctxt = fd->uctxt;
	struct hfi1_devdata *dd = uctxt->dd;
	__poll_t pollflag;

	poll_wait(fp, &uctxt->wait, pt);

	spin_lock_irq(&dd->uctxt_lock);
	if (uctxt->urgent != uctxt->urgent_poll) {
		pollflag = EPOLLIN | EPOLLRDNORM;
		uctxt->urgent_poll = uctxt->urgent;
	} else {
		pollflag = 0;
		set_bit(HFI1_CTXT_WAITING_URG, &uctxt->event_flags);
	}
	spin_unlock_irq(&dd->uctxt_lock);

	return pollflag;
}

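/**
 * poll_next - poll method used while waiting for the next packet
 * @fp: the open file
 * @pt: poll table to register the wait queue with
 *
 * If the receive header queue is empty, re-enables the receive available
 * interrupt and arms HFI1_CTXT_WAITING_RCV; otherwise reports that data
 * is ready with EPOLLIN | EPOLLRDNORM.
 */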
static __poll_t poll_next(struct file *fp,
			  struct poll_table_struct *pt)
{
	struct hfi1_filedata *fd = fp->private_data;
	struct hfi1_ctxtdata *uctxt = fd->uctxt;
	struct hfi1_devdata *dd = uctxt->dd;
	__poll_t pollflag;

	poll_wait(fp, &uctxt->wait, pt);

	spin_lock_irq(&dd->uctxt_lock);
	if (hdrqempty(uctxt)) {
		set_bit(HFI1_CTXT_WAITING_RCV, &uctxt->event_flags);
		hfi1_rcvctrl(dd, HFI1_RCVCTRL_INTRAVAIL_ENB, uctxt);
		pollflag = 0;
	} else {
		pollflag = EPOLLIN | EPOLLRDNORM;
	}
	spin_unlock_irq(&dd->uctxt_lock);

	return pollflag;
}

/*
 * Find all user contexts in use, and set the specified bit in their
 * event mask.
 * See also find_ctxt() for a similar use, that is specific to send buffers.
 */
int hfi1_set_uevent_bits(struct hfi1_pportdata *ppd, const int evtbit)
{
	struct hfi1_ctxtdata *uctxt;
	struct hfi1_devdata *dd = ppd->dd;
	u16 ctxt;

	if (!dd->events)
		return -EINVAL;

	for (ctxt = dd->first_dyn_alloc_ctxt; ctxt < dd->num_rcv_contexts;
	     ctxt++) {
		uctxt = hfi1_rcd_get_by_index(dd, ctxt);
		if (uctxt) {
			unsigned long *evs;
			int i;
			/*
			 * subctxt_cnt is 0 if not shared, so do base
			 * separately, first, then remaining subctxt, if any
			 */
			evs = dd->events + uctxt_offset(uctxt);
			set_bit(evtbit, evs);
			for (i = 1; i < uctxt->subctxt_cnt; i++)
				set_bit(evtbit, evs + i);
			hfi1_rcd_put(uctxt);
		}
	}

	return 0;
}

/**
 * manage_rcvq - manage a context's receive queue
 * @uctxt: the context
 * @subctxt: the sub-context
 * @arg: user space pointer to the start/stop flag
 *
 * A start/stop value of 0 disables receive on the context, for use in queue
 * overflow conditions. A value of 1 re-enables, to be used to
 * re-init the software copy of the head register.
 */
static int manage_rcvq(struct hfi1_ctxtdata *uctxt, u16 subctxt,
		       unsigned long arg)
{
	struct hfi1_devdata *dd = uctxt->dd;
	unsigned int rcvctrl_op;
	int start_stop;

	if (subctxt)
		return 0;

	if (get_user(start_stop, (int __user *)arg))
		return -EFAULT;

	/* atomically clear receive enable ctxt. */
	if (start_stop) {
		/*
		 * On enable, force in-memory copy of the tail register to
		 * 0, so that protocol code doesn't have to worry about
		 * whether or not the chip has yet updated the in-memory
		 * copy or not on return from the system call. The chip
		 * always resets its tail register back to 0 on a
		 * transition from disabled to enabled.
		 */
		if (uctxt->rcvhdrtail_kvaddr)
			clear_rcvhdrtail(uctxt);
		rcvctrl_op = HFI1_RCVCTRL_CTXT_ENB;
	} else {
		rcvctrl_op = HFI1_RCVCTRL_CTXT_DIS;
	}
	hfi1_rcvctrl(dd, rcvctrl_op, uctxt);
	/* always; new head should be equal to new tail; see above */

	return 0;
}

/*
 * Clear the event notifier events for this context.
 * The user process then performs whatever actions are appropriate to the
 * bits having been set, if desired, and checks again in the future.
 */
static int user_event_ack(struct hfi1_ctxtdata *uctxt, u16 subctxt,
			  unsigned long arg)
{
	int i;
	struct hfi1_devdata *dd = uctxt->dd;
	unsigned long *evs;
	unsigned long events;

	if (!dd->events)
		return 0;

	if (get_user(events, (unsigned long __user *)arg))
		return -EFAULT;

	evs = dd->events + uctxt_offset(uctxt) + subctxt;

	for (i = 0; i <= _HFI1_MAX_EVENT_BIT; i++) {
		if (!test_bit(i, &events))
			continue;
		clear_bit(i, evs);
	}
	return 0;
}

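/**
 * set_ctxt_pkey - set the partition key used by a user context
 * @uctxt: the context
 * @arg: user space pointer to the requested 16-bit pkey
 *
 * The requested pkey must not be a management pkey and must already be
 * present in the port's pkey table; otherwise the request is rejected.
 */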
static int set_ctxt_pkey(struct hfi1_ctxtdata *uctxt, unsigned long arg)
{
	int i;
	struct hfi1_pportdata *ppd = uctxt->ppd;
	struct hfi1_devdata *dd = uctxt->dd;
	u16 pkey;

	if (!HFI1_CAP_IS_USET(PKEY_CHECK))
		return -EPERM;

	if (get_user(pkey, (u16 __user *)arg))
		return -EFAULT;

	if (pkey == LIM_MGMT_P_KEY || pkey == FULL_MGMT_P_KEY)
		return -EINVAL;

	for (i = 0; i < ARRAY_SIZE(ppd->pkeys); i++)
		if (pkey == ppd->pkeys[i])
			return hfi1_set_ctxt_pkey(dd, uctxt, pkey);

	return -ENOENT;
}

/**
 * ctxt_reset - Reset the user context
 * @uctxt: valid user context
 */
static int ctxt_reset(struct hfi1_ctxtdata *uctxt)
{
	struct send_context *sc;
	struct hfi1_devdata *dd;
	int ret = 0;

	if (!uctxt || !uctxt->dd || !uctxt->sc)
		return -EINVAL;

	/*
	 * There is no protection here. User level has to guarantee that
	 * no one will be writing to the send context while it is being
	 * re-initialized. If user level breaks that guarantee, it will
	 * break its own context and no one else's.
	 */
	dd = uctxt->dd;
	sc = uctxt->sc;

	/*
	 * Wait until the interrupt handler has marked the context as
	 * halted or frozen. Report error if we time out.
	 */
	wait_event_interruptible_timeout(
		sc->halt_wait, (sc->flags & SCF_HALTED),
		msecs_to_jiffies(SEND_CTXT_HALT_TIMEOUT));
	if (!(sc->flags & SCF_HALTED))
		return -ENOLCK;

	/*
	 * If the send context was halted due to a Freeze, wait until the
	 * device has been "unfrozen" before resetting the context.
	 */
	if (sc->flags & SCF_FROZEN) {
		wait_event_interruptible_timeout(
			dd->event_queue,
			!(READ_ONCE(dd->flags) & HFI1_FROZEN),
			msecs_to_jiffies(SEND_CTXT_HALT_TIMEOUT));
		if (dd->flags & HFI1_FROZEN)
			return -ENOLCK;

		if (dd->flags & HFI1_FORCED_FREEZE)
			/*
			 * Don't allow context reset if we are into
			 * forced freeze
			 */
			return -ENODEV;

		sc_disable(sc);
		ret = sc_enable(sc);
		hfi1_rcvctrl(dd, HFI1_RCVCTRL_CTXT_ENB, uctxt);
	} else {
		ret = sc_restart(sc);
	}
	if (!ret)
		sc_return_credits(sc);

	return ret;
}

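/**
 * user_remove - remove the user char device for a unit
 * @dd: the device data
 */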
static void user_remove(struct hfi1_devdata *dd)
{
	hfi1_cdev_cleanup(&dd->user_cdev, &dd->user_device);
}

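/**
 * user_add - create the user char device for a unit
 * @dd: the device data
 *
 * Creates the "<class>_<unit>" character device node through
 * hfi1_cdev_init() and tears it back down on failure.
 */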
static int user_add(struct hfi1_devdata *dd)
{
	char name[10];
	int ret;

	snprintf(name, sizeof(name), "%s_%d", class_name(), dd->unit);
	ret = hfi1_cdev_init(dd->unit, name, &hfi1_file_ops,
			     &dd->user_cdev, &dd->user_device,
			     true, &dd->kobj);
	if (ret)
		user_remove(dd);

	return ret;
}

/*
 * Create per-unit files in /dev
 */
int hfi1_device_create(struct hfi1_devdata *dd)
{
	return user_add(dd);
}

/*
 * Remove per-unit files in /dev
 * void, core kernel returns no errors for this stuff
 */
void hfi1_device_remove(struct hfi1_devdata *dd)
{
	user_remove(dd);
}