/*
 * Copyright (c) 2005, 2006, 2007, 2008 Mellanox Technologies. All rights reserved.
 * Copyright (c) 2005, 2006, 2007 Cisco Systems, Inc. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses. You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer in the documentation and/or other materials
 *   provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>

#include <linux/mlx4/cmd.h>
#include <linux/cpu_rmap.h>

#include "mlx4.h"
#include "fw.h"

enum {
	MLX4_IRQNAME_SIZE = 32
};

enum {
	MLX4_NUM_ASYNC_EQE = 0x100,
	MLX4_NUM_SPARE_EQE = 0x80,
	MLX4_EQ_ENTRY_SIZE = 0x20
};

#define MLX4_EQ_STATUS_OK	   ( 0 << 28)
#define MLX4_EQ_STATUS_WRITE_FAIL  (10 << 28)
#define MLX4_EQ_OWNER_SW	   ( 0 << 24)
#define MLX4_EQ_OWNER_HW	   ( 1 << 24)
#define MLX4_EQ_FLAG_EC		   ( 1 << 18)
#define MLX4_EQ_FLAG_OI		   ( 1 << 17)
#define MLX4_EQ_STATE_ARMED	   ( 9 << 8)
#define MLX4_EQ_STATE_FIRED	   (10 << 8)
#define MLX4_EQ_STATE_ALWAYS_ARMED (11 << 8)

#define MLX4_ASYNC_EVENT_MASK ((1ull << MLX4_EVENT_TYPE_PATH_MIG) | \
			       (1ull << MLX4_EVENT_TYPE_COMM_EST) | \
			       (1ull << MLX4_EVENT_TYPE_SQ_DRAINED) | \
			       (1ull << MLX4_EVENT_TYPE_CQ_ERROR) | \
			       (1ull << MLX4_EVENT_TYPE_WQ_CATAS_ERROR) | \
			       (1ull << MLX4_EVENT_TYPE_EEC_CATAS_ERROR) | \
			       (1ull << MLX4_EVENT_TYPE_PATH_MIG_FAILED) | \
			       (1ull << MLX4_EVENT_TYPE_WQ_INVAL_REQ_ERROR) | \
			       (1ull << MLX4_EVENT_TYPE_WQ_ACCESS_ERROR) | \
			       (1ull << MLX4_EVENT_TYPE_PORT_CHANGE) | \
			       (1ull << MLX4_EVENT_TYPE_ECC_DETECT) | \
			       (1ull << MLX4_EVENT_TYPE_SRQ_CATAS_ERROR) | \
			       (1ull << MLX4_EVENT_TYPE_SRQ_QP_LAST_WQE) | \
			       (1ull << MLX4_EVENT_TYPE_SRQ_LIMIT) | \
			       (1ull << MLX4_EVENT_TYPE_CMD) | \
			       (1ull << MLX4_EVENT_TYPE_OP_REQUIRED) | \
			       (1ull << MLX4_EVENT_TYPE_COMM_CHANNEL) | \
			       (1ull << MLX4_EVENT_TYPE_FLR_EVENT) | \
			       (1ull << MLX4_EVENT_TYPE_FATAL_WARNING))

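/* Build the mask of asynchronous events to be mapped to the async EQ,
 * adding the optional events the device reports support for.
 */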
static u64 get_async_ev_mask(struct mlx4_dev *dev)
{
	u64 async_ev_mask = MLX4_ASYNC_EVENT_MASK;

	if (dev->caps.flags & MLX4_DEV_CAP_FLAG_PORT_MNG_CHG_EV)
		async_ev_mask |= (1ull << MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT);
	if (dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RECOVERABLE_ERROR_EVENT)
		async_ev_mask |= (1ull << MLX4_EVENT_TYPE_RECOVERABLE_ERROR_EVENT);

	return async_ev_mask;
}

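/* Publish the current consumer index to the EQ doorbell; req_not additionally
 * requests that the next event generate an interrupt.
 */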
static void eq_set_ci(struct mlx4_eq *eq, int req_not)
{
	__raw_writel((__force u32) cpu_to_be32((eq->cons_index & 0xffffff) |
					       req_not << 31),
		     eq->doorbell);
	/* We still want ordering, just not swabbing, so add a barrier */
	mb();
}

static struct mlx4_eqe *get_eqe(struct mlx4_eq *eq, u32 entry, u8 eqe_factor,
				u8 eqe_size)
{
	/* (entry & (eq->nent - 1)) gives us a cyclic array */
	unsigned long offset = (entry & (eq->nent - 1)) * eqe_size;

	/* CX3 is capable of extending the EQE from 32 to 64 bytes with
	 * strides of 64B, 128B and 256B.
	 * When a 64B EQE is used, the first (in the lower addresses)
	 * 32 bytes in the 64 byte EQE are reserved and the next 32 bytes
	 * contain the legacy EQE information.
	 * In all other cases, the first 32B contains the legacy EQE info.
	 */
	return eq->page_list[offset / PAGE_SIZE].buf + (offset + (eqe_factor ? MLX4_EQ_ENTRY_SIZE : 0)) % PAGE_SIZE;
}

static struct mlx4_eqe *next_eqe_sw(struct mlx4_eq *eq, u8 eqe_factor, u8 size)
{
	struct mlx4_eqe *eqe = get_eqe(eq, eq->cons_index, eqe_factor, size);
	return !!(eqe->owner & 0x80) ^ !!(eq->cons_index & eq->nent) ? NULL : eqe;
}

static struct mlx4_eqe *next_slave_event_eqe(struct mlx4_slave_event_eq *slave_eq)
{
	struct mlx4_eqe *eqe =
		&slave_eq->event_eqe[slave_eq->cons & (SLAVE_EVENT_EQ_SIZE - 1)];
	return (!!(eqe->owner & 0x80) ^
		!!(slave_eq->cons & SLAVE_EVENT_EQ_SIZE)) ?
		eqe : NULL;
}

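/* Work handler on the master: drain the software slave event queue and
 * forward each queued EQE to the slave(s) it is destined for via GEN_EQE.
 */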
void mlx4_gen_slave_eqe(struct work_struct *work)
{
	struct mlx4_mfunc_master_ctx *master =
		container_of(work, struct mlx4_mfunc_master_ctx,
			     slave_event_work);
	struct mlx4_mfunc *mfunc =
		container_of(master, struct mlx4_mfunc, master);
	struct mlx4_priv *priv = container_of(mfunc, struct mlx4_priv, mfunc);
	struct mlx4_dev *dev = &priv->dev;
	struct mlx4_slave_event_eq *slave_eq = &mfunc->master.slave_eq;
	struct mlx4_eqe *eqe;
	u8 slave;
	int i, phys_port, slave_port;

	for (eqe = next_slave_event_eqe(slave_eq); eqe;
	     eqe = next_slave_event_eqe(slave_eq)) {
		slave = eqe->slave_id;

		if (eqe->type == MLX4_EVENT_TYPE_PORT_CHANGE &&
		    eqe->subtype == MLX4_PORT_CHANGE_SUBTYPE_DOWN &&
		    mlx4_is_bonded(dev)) {
			struct mlx4_port_cap port_cap;

			if (!mlx4_QUERY_PORT(dev, 1, &port_cap) && port_cap.link_state)
				goto consume;

			if (!mlx4_QUERY_PORT(dev, 2, &port_cap) && port_cap.link_state)
				goto consume;
		}
		/* All active slaves need to receive the event */
		if (slave == ALL_SLAVES) {
			for (i = 0; i <= dev->persist->num_vfs; i++) {
				phys_port = 0;
				if (eqe->type == MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT &&
				    eqe->subtype == MLX4_DEV_PMC_SUBTYPE_PORT_INFO) {
					phys_port = eqe->event.port_mgmt_change.port;
					slave_port = mlx4_phys_to_slave_port(dev, i, phys_port);
					if (slave_port < 0) /* VF doesn't have this port */
						continue;
					eqe->event.port_mgmt_change.port = slave_port;
				}
				if (mlx4_GEN_EQE(dev, i, eqe))
					mlx4_warn(dev, "Failed to generate event for slave %d\n",
						  i);
				if (phys_port)
					eqe->event.port_mgmt_change.port = phys_port;
			}
		} else {
			if (mlx4_GEN_EQE(dev, slave, eqe))
				mlx4_warn(dev, "Failed to generate event for slave %d\n",
					  slave);
		}
consume:
		++slave_eq->cons;
	}
}

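/* Queue an EQE on the software slave event queue and kick the work queue
 * that delivers it; the event is dropped with a warning if the queue is full.
 */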
static void slave_event(struct mlx4_dev *dev, u8 slave, struct mlx4_eqe *eqe)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_event_eq *slave_eq = &priv->mfunc.master.slave_eq;
	struct mlx4_eqe *s_eqe;
	unsigned long flags;

	spin_lock_irqsave(&slave_eq->event_lock, flags);
	s_eqe = &slave_eq->event_eqe[slave_eq->prod & (SLAVE_EVENT_EQ_SIZE - 1)];
	if ((!!(s_eqe->owner & 0x80)) ^
	    (!!(slave_eq->prod & SLAVE_EVENT_EQ_SIZE))) {
		mlx4_warn(dev, "Master failed to generate an EQE for slave: %d. No free EQE on slave events queue\n",
			  slave);
		spin_unlock_irqrestore(&slave_eq->event_lock, flags);
		return;
	}

	memcpy(s_eqe, eqe, sizeof(struct mlx4_eqe) - 1);
	s_eqe->slave_id = slave;
	/* ensure all information is written before setting the ownership bit */
	dma_wmb();
	s_eqe->owner = !!(slave_eq->prod & SLAVE_EVENT_EQ_SIZE) ? 0x0 : 0x80;
	++slave_eq->prod;

	queue_work(priv->mfunc.master.comm_wq,
		   &priv->mfunc.master.slave_event_work);
	spin_unlock_irqrestore(&slave_eq->event_lock, flags);
}

static void mlx4_slave_event(struct mlx4_dev *dev, int slave,
			     struct mlx4_eqe *eqe)
{
	struct mlx4_priv *priv = mlx4_priv(dev);

	if (slave < 0 || slave > dev->persist->num_vfs ||
	    slave == dev->caps.function ||
	    !priv->mfunc.master.slave_state[slave].active)
		return;

	slave_event(dev, slave, eqe);
}

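/* Propagate the EQ's configured CPU affinity mask to its IRQ as an affinity
 * hint (SMP builds only).
 */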
#if defined(CONFIG_SMP)
static void mlx4_set_eq_affinity_hint(struct mlx4_priv *priv, int vec)
{
	int hint_err;
	struct mlx4_dev *dev = &priv->dev;
	struct mlx4_eq *eq = &priv->eq_table.eq[vec];

	if (!eq->affinity_mask || cpumask_empty(eq->affinity_mask))
		return;

	hint_err = irq_set_affinity_hint(eq->irq, eq->affinity_mask);
	if (hint_err)
		mlx4_warn(dev, "irq_set_affinity_hint failed, err %d\n", hint_err);
}
#endif

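/* Generate a "PKEY table changed" port management change event towards a
 * single slave so that its IB stack re-reads the PKEY table.
 */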
int mlx4_gen_pkey_eqe(struct mlx4_dev *dev, int slave, u8 port)
{
	struct mlx4_eqe eqe;

	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_state *s_slave = &priv->mfunc.master.slave_state[slave];

	if (!s_slave->active)
		return 0;

	memset(&eqe, 0, sizeof eqe);

	eqe.type = MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT;
	eqe.subtype = MLX4_DEV_PMC_SUBTYPE_PKEY_TABLE;
	eqe.event.port_mgmt_change.port = mlx4_phys_to_slave_port(dev, slave, port);

	return mlx4_GEN_EQE(dev, slave, &eqe);
}
EXPORT_SYMBOL(mlx4_gen_pkey_eqe);

int mlx4_gen_guid_change_eqe(struct mlx4_dev *dev, int slave, u8 port)
{
	struct mlx4_eqe eqe;

	/* don't send if we don't have that slave */
	if (dev->persist->num_vfs < slave)
		return 0;
	memset(&eqe, 0, sizeof eqe);

	eqe.type = MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT;
	eqe.subtype = MLX4_DEV_PMC_SUBTYPE_GUID_INFO;
	eqe.event.port_mgmt_change.port = mlx4_phys_to_slave_port(dev, slave, port);

	return mlx4_GEN_EQE(dev, slave, &eqe);
}
EXPORT_SYMBOL(mlx4_gen_guid_change_eqe);

int mlx4_gen_port_state_change_eqe(struct mlx4_dev *dev, int slave, u8 port,
				   u8 port_subtype_change)
{
	struct mlx4_eqe eqe;
	u8 slave_port = mlx4_phys_to_slave_port(dev, slave, port);

	/* don't send if we don't have that slave */
	if (dev->persist->num_vfs < slave)
		return 0;
	memset(&eqe, 0, sizeof eqe);

	eqe.type = MLX4_EVENT_TYPE_PORT_CHANGE;
	eqe.subtype = port_subtype_change;
	eqe.event.port_change.port = cpu_to_be32(slave_port << 28);

	mlx4_dbg(dev, "%s: sending: %d to slave: %d on port: %d\n", __func__,
		 port_subtype_change, slave, port);
	return mlx4_GEN_EQE(dev, slave, &eqe);
}
EXPORT_SYMBOL(mlx4_gen_port_state_change_eqe);

enum slave_port_state mlx4_get_slave_port_state(struct mlx4_dev *dev, int slave, u8 port)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_state *s_state = priv->mfunc.master.slave_state;
	struct mlx4_active_ports actv_ports = mlx4_get_active_ports(dev, slave);

	if (slave >= dev->num_slaves || port > dev->caps.num_ports ||
	    port <= 0 || !test_bit(port - 1, actv_ports.ports)) {
		pr_err("%s: Error: asking for slave:%d, port:%d\n",
		       __func__, slave, port);
		return SLAVE_PORT_DOWN;
	}
	return s_state[slave].port_state[port];
}
EXPORT_SYMBOL(mlx4_get_slave_port_state);

static int mlx4_set_slave_port_state(struct mlx4_dev *dev, int slave, u8 port,
				     enum slave_port_state state)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_state *s_state = priv->mfunc.master.slave_state;
	struct mlx4_active_ports actv_ports = mlx4_get_active_ports(dev, slave);

	if (slave >= dev->num_slaves || port > dev->caps.num_ports ||
	    port <= 0 || !test_bit(port - 1, actv_ports.ports)) {
		pr_err("%s: Error: asking for slave:%d, port:%d\n",
		       __func__, slave, port);
		return -1;
	}
	s_state[slave].port_state[port] = state;

	return 0;
}

static void set_all_slave_state(struct mlx4_dev *dev, u8 port, int event)
{
	int i;
	enum slave_port_gen_event gen_event;
	struct mlx4_slaves_pport slaves_pport = mlx4_phys_to_slaves_pport(dev,
									  port);

	for (i = 0; i < dev->persist->num_vfs + 1; i++)
		if (test_bit(i, slaves_pport.slaves))
			set_and_calc_slave_port_state(dev, i, port,
						      event, &gen_event);
}

/**************************************************************************
	The function gets as input the new event for that port,
	and according to the previous state changes the slave's port state.
	The events are:
		MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN,
		MLX4_PORT_STATE_DEV_EVENT_PORT_UP
		MLX4_PORT_STATE_IB_EVENT_GID_VALID
		MLX4_PORT_STATE_IB_EVENT_GID_INVALID
***************************************************************************/
int set_and_calc_slave_port_state(struct mlx4_dev *dev, int slave,
				  u8 port, int event,
				  enum slave_port_gen_event *gen_event)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_state *ctx = NULL;
	unsigned long flags;
	int ret = -1;
	struct mlx4_active_ports actv_ports = mlx4_get_active_ports(dev, slave);
	enum slave_port_state cur_state =
		mlx4_get_slave_port_state(dev, slave, port);

	*gen_event = SLAVE_PORT_GEN_EVENT_NONE;

	if (slave >= dev->num_slaves || port > dev->caps.num_ports ||
	    port <= 0 || !test_bit(port - 1, actv_ports.ports)) {
		pr_err("%s: Error: asking for slave:%d, port:%d\n",
		       __func__, slave, port);
		return ret;
	}

	ctx = &priv->mfunc.master.slave_state[slave];
	spin_lock_irqsave(&ctx->lock, flags);

	switch (cur_state) {
	case SLAVE_PORT_DOWN:
		if (MLX4_PORT_STATE_DEV_EVENT_PORT_UP == event)
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PENDING_UP);
		break;
	case SLAVE_PENDING_UP:
		if (MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN == event)
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PORT_DOWN);
		else if (MLX4_PORT_STATE_IB_PORT_STATE_EVENT_GID_VALID == event) {
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PORT_UP);
			*gen_event = SLAVE_PORT_GEN_EVENT_UP;
		}
		break;
	case SLAVE_PORT_UP:
		if (MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN == event) {
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PORT_DOWN);
			*gen_event = SLAVE_PORT_GEN_EVENT_DOWN;
		} else if (MLX4_PORT_STATE_IB_EVENT_GID_INVALID ==
				event) {
			mlx4_set_slave_port_state(dev, slave, port,
						  SLAVE_PENDING_UP);
			*gen_event = SLAVE_PORT_GEN_EVENT_DOWN;
		}
		break;
	default:
		pr_err("%s: BUG!!! UNKNOWN state: slave:%d, port:%d\n",
		       __func__, slave, port);
		goto out;
	}
	ret = mlx4_get_slave_port_state(dev, slave, port);

out:
	spin_unlock_irqrestore(&ctx->lock, flags);
	return ret;
}

EXPORT_SYMBOL(set_and_calc_slave_port_state);

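/* Broadcast a PORT_INFO port management change event for the given port to
 * all slaves via the software slave event queue.
 */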
int mlx4_gen_slaves_port_mgt_ev(struct mlx4_dev *dev, u8 port, int attr)
{
	struct mlx4_eqe eqe;

	memset(&eqe, 0, sizeof eqe);

	eqe.type = MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT;
	eqe.subtype = MLX4_DEV_PMC_SUBTYPE_PORT_INFO;
	eqe.event.port_mgmt_change.port = port;
	eqe.event.port_mgmt_change.params.port_info.changed_attr =
		cpu_to_be32((u32) attr);

	slave_event(dev, ALL_SLAVES, &eqe);
	return 0;
}
EXPORT_SYMBOL(mlx4_gen_slaves_port_mgt_ev);

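/* Work handler on the master: clean up resources of slaves that reported a
 * Function Level Reset (FLR), return them to RESET state and acknowledge the
 * FLR to the firmware.
 */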
void mlx4_master_handle_slave_flr(struct work_struct *work)
{
	struct mlx4_mfunc_master_ctx *master =
		container_of(work, struct mlx4_mfunc_master_ctx,
			     slave_flr_event_work);
	struct mlx4_mfunc *mfunc =
		container_of(master, struct mlx4_mfunc, master);
	struct mlx4_priv *priv =
		container_of(mfunc, struct mlx4_priv, mfunc);
	struct mlx4_dev *dev = &priv->dev;
	struct mlx4_slave_state *slave_state = priv->mfunc.master.slave_state;
	int i;
	int err;
	unsigned long flags;

	mlx4_dbg(dev, "mlx4_handle_slave_flr\n");

	for (i = 0 ; i < dev->num_slaves; i++) {

		if (MLX4_COMM_CMD_FLR == slave_state[i].last_cmd) {
			mlx4_dbg(dev, "mlx4_handle_slave_flr: clean slave: %d\n",
				 i);
			/* In case of 'Reset flow' FLR can be generated for
			 * a slave before mlx4_load_one is done.
			 * make sure interface is up before trying to delete
			 * slave resources which weren't allocated yet.
			 */
			if (dev->persist->interface_state &
			    MLX4_INTERFACE_STATE_UP)
				mlx4_delete_all_resources_for_slave(dev, i);
			/* return the slave to running mode */
			spin_lock_irqsave(&priv->mfunc.master.slave_state_lock, flags);
			slave_state[i].last_cmd = MLX4_COMM_CMD_RESET;
			slave_state[i].is_slave_going_down = 0;
			spin_unlock_irqrestore(&priv->mfunc.master.slave_state_lock, flags);
			/* notify the FW */
			err = mlx4_cmd(dev, 0, i, 0, MLX4_CMD_INFORM_FLR_DONE,
				       MLX4_CMD_TIME_CLASS_A, MLX4_CMD_WRAPPED);
			if (err)
				mlx4_warn(dev, "Failed to notify FW on FLR done (slave:%d)\n",
					  i);
		}
	}
}

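/* Poll one EQ: walk the EQEs we own, dispatch each event type (completions,
 * async events, SR-IOV master bookkeeping), refresh the consumer index often
 * enough that the HCA never sees the queue as overflowed, and finally re-arm
 * the EQ.
 */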
static int mlx4_eq_int(struct mlx4_dev *dev, struct mlx4_eq *eq)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_eqe *eqe;
	int cqn = -1;
	int eqes_found = 0;
	int set_ci = 0;
	int port;
	int slave = 0;
	int ret;
	u32 flr_slave;
	u8 update_slave_state;
	int i;
	enum slave_port_gen_event gen_event;
	unsigned long flags;
	struct mlx4_vport_state *s_info;
	int eqe_size = dev->caps.eqe_size;

	while ((eqe = next_eqe_sw(eq, dev->caps.eqe_factor, eqe_size))) {
		/*
		 * Make sure we read EQ entry contents after we've
		 * checked the ownership bit.
		 */
		dma_rmb();

		switch (eqe->type) {
		case MLX4_EVENT_TYPE_COMP:
			cqn = be32_to_cpu(eqe->event.comp.cqn) & 0xffffff;
			mlx4_cq_completion(dev, cqn);
			break;

		case MLX4_EVENT_TYPE_PATH_MIG:
		case MLX4_EVENT_TYPE_COMM_EST:
		case MLX4_EVENT_TYPE_SQ_DRAINED:
		case MLX4_EVENT_TYPE_SRQ_QP_LAST_WQE:
		case MLX4_EVENT_TYPE_WQ_CATAS_ERROR:
		case MLX4_EVENT_TYPE_PATH_MIG_FAILED:
		case MLX4_EVENT_TYPE_WQ_INVAL_REQ_ERROR:
		case MLX4_EVENT_TYPE_WQ_ACCESS_ERROR:
			mlx4_dbg(dev, "event %d arrived\n", eqe->type);
			if (mlx4_is_master(dev)) {
				/* forward only to slave owning the QP */
				ret = mlx4_get_slave_from_resource_id(dev,
						RES_QP,
						be32_to_cpu(eqe->event.qp.qpn)
						& 0xffffff, &slave);
				if (ret && ret != -ENOENT) {
					mlx4_dbg(dev, "QP event %02x(%02x) on EQ %d at index %u: could not get slave id (%d)\n",
						 eqe->type, eqe->subtype,
						 eq->eqn, eq->cons_index, ret);
					break;
				}

				if (!ret && slave != dev->caps.function) {
					mlx4_slave_event(dev, slave, eqe);
					break;
				}

			}
			mlx4_qp_event(dev, be32_to_cpu(eqe->event.qp.qpn) &
				      0xffffff, eqe->type);
			break;

		case MLX4_EVENT_TYPE_SRQ_LIMIT:
			mlx4_dbg(dev, "%s: MLX4_EVENT_TYPE_SRQ_LIMIT\n",
				 __func__);
		case MLX4_EVENT_TYPE_SRQ_CATAS_ERROR:
			if (mlx4_is_master(dev)) {
				/* forward only to slave owning the SRQ */
				ret = mlx4_get_slave_from_resource_id(dev,
						RES_SRQ,
						be32_to_cpu(eqe->event.srq.srqn)
						& 0xffffff,
						&slave);
				if (ret && ret != -ENOENT) {
					mlx4_warn(dev, "SRQ event %02x(%02x) on EQ %d at index %u: could not get slave id (%d)\n",
						  eqe->type, eqe->subtype,
						  eq->eqn, eq->cons_index, ret);
					break;
				}
				mlx4_warn(dev, "%s: slave:%d, srq_no:0x%x, event: %02x(%02x)\n",
					  __func__, slave,
					  be32_to_cpu(eqe->event.srq.srqn),
					  eqe->type, eqe->subtype);

				if (!ret && slave != dev->caps.function) {
					mlx4_warn(dev, "%s: sending event %02x(%02x) to slave:%d\n",
						  __func__, eqe->type,
						  eqe->subtype, slave);
					mlx4_slave_event(dev, slave, eqe);
					break;
				}
			}
			mlx4_srq_event(dev, be32_to_cpu(eqe->event.srq.srqn) &
				       0xffffff, eqe->type);
			break;

		case MLX4_EVENT_TYPE_CMD:
			mlx4_cmd_event(dev,
				       be16_to_cpu(eqe->event.cmd.token),
				       eqe->event.cmd.status,
				       be64_to_cpu(eqe->event.cmd.out_param));
			break;

		case MLX4_EVENT_TYPE_PORT_CHANGE: {
			struct mlx4_slaves_pport slaves_port;
			port = be32_to_cpu(eqe->event.port_change.port) >> 28;
			slaves_port = mlx4_phys_to_slaves_pport(dev, port);
			if (eqe->subtype == MLX4_PORT_CHANGE_SUBTYPE_DOWN) {
				mlx4_dispatch_event(dev, MLX4_DEV_EVENT_PORT_DOWN,
						    port);
				mlx4_priv(dev)->sense.do_sense_port[port] = 1;
				if (!mlx4_is_master(dev))
					break;
				for (i = 0; i < dev->persist->num_vfs + 1;
				     i++) {
					int reported_port = mlx4_is_bonded(dev) ? 1 : mlx4_phys_to_slave_port(dev, i, port);

					if (!test_bit(i, slaves_port.slaves) && !mlx4_is_bonded(dev))
						continue;
					if (dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH) {
						if (i == mlx4_master_func_num(dev))
							continue;
						mlx4_dbg(dev, "%s: Sending MLX4_PORT_CHANGE_SUBTYPE_DOWN to slave: %d, port:%d\n",
							 __func__, i, port);
						s_info = &priv->mfunc.master.vf_oper[i].vport[port].state;
						if (IFLA_VF_LINK_STATE_AUTO == s_info->link_state) {
							eqe->event.port_change.port =
								cpu_to_be32(
								(be32_to_cpu(eqe->event.port_change.port) & 0xFFFFFFF)
								| (reported_port << 28));
							mlx4_slave_event(dev, i, eqe);
						}
					} else {  /* IB port */
						set_and_calc_slave_port_state(dev, i, port,
									      MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN,
									      &gen_event);
						/* we can be in pending state, then do not send port_down event */
						if (SLAVE_PORT_GEN_EVENT_DOWN == gen_event) {
							if (i == mlx4_master_func_num(dev))
								continue;
							eqe->event.port_change.port =
								cpu_to_be32(
								(be32_to_cpu(eqe->event.port_change.port) & 0xFFFFFFF)
								| (mlx4_phys_to_slave_port(dev, i, port) << 28));
							mlx4_slave_event(dev, i, eqe);
						}
					}
				}
			} else {
				mlx4_dispatch_event(dev, MLX4_DEV_EVENT_PORT_UP, port);

				mlx4_priv(dev)->sense.do_sense_port[port] = 0;

				if (!mlx4_is_master(dev))
					break;
				if (dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH)
					for (i = 0;
					     i < dev->persist->num_vfs + 1;
					     i++) {
						int reported_port = mlx4_is_bonded(dev) ? 1 : mlx4_phys_to_slave_port(dev, i, port);

						if (!test_bit(i, slaves_port.slaves) && !mlx4_is_bonded(dev))
							continue;
						if (i == mlx4_master_func_num(dev))
							continue;
						s_info = &priv->mfunc.master.vf_oper[i].vport[port].state;
						if (IFLA_VF_LINK_STATE_AUTO == s_info->link_state) {
							eqe->event.port_change.port =
								cpu_to_be32(
								(be32_to_cpu(eqe->event.port_change.port) & 0xFFFFFFF)
								| (reported_port << 28));
							mlx4_slave_event(dev, i, eqe);
						}
					}
				else /* IB port */
					/* port-up event will be sent to a slave when the
					 * slave's alias-guid is set. This is done in alias_GUID.c
					 */
					set_all_slave_state(dev, port, MLX4_DEV_EVENT_PORT_UP);
			}
			break;
		}

		case MLX4_EVENT_TYPE_CQ_ERROR:
			mlx4_warn(dev, "CQ %s on CQN %06x\n",
				  eqe->event.cq_err.syndrome == 1 ?
				  "overrun" : "access violation",
				  be32_to_cpu(eqe->event.cq_err.cqn) & 0xffffff);
			if (mlx4_is_master(dev)) {
				ret = mlx4_get_slave_from_resource_id(dev,
					RES_CQ,
					be32_to_cpu(eqe->event.cq_err.cqn)
					& 0xffffff, &slave);
				if (ret && ret != -ENOENT) {
					mlx4_dbg(dev, "CQ event %02x(%02x) on EQ %d at index %u: could not get slave id (%d)\n",
						 eqe->type, eqe->subtype,
						 eq->eqn, eq->cons_index, ret);
					break;
				}

				if (!ret && slave != dev->caps.function) {
					mlx4_slave_event(dev, slave, eqe);
					break;
				}
			}
			mlx4_cq_event(dev,
				      be32_to_cpu(eqe->event.cq_err.cqn)
				      & 0xffffff,
				      eqe->type);
			break;

		case MLX4_EVENT_TYPE_EQ_OVERFLOW:
			mlx4_warn(dev, "EQ overrun on EQN %d\n", eq->eqn);
			break;

		case MLX4_EVENT_TYPE_OP_REQUIRED:
			atomic_inc(&priv->opreq_count);
			/* FW commands can't be executed from interrupt context
			 * working in deferred task
			 */
			queue_work(mlx4_wq, &priv->opreq_task);
			break;

		case MLX4_EVENT_TYPE_COMM_CHANNEL:
			if (!mlx4_is_master(dev)) {
				mlx4_warn(dev, "Received comm channel event for non master device\n");
				break;
			}
			memcpy(&priv->mfunc.master.comm_arm_bit_vector,
			       eqe->event.comm_channel_arm.bit_vec,
			       sizeof eqe->event.comm_channel_arm.bit_vec);
			queue_work(priv->mfunc.master.comm_wq,
				   &priv->mfunc.master.comm_work);
			break;

		case MLX4_EVENT_TYPE_FLR_EVENT:
			flr_slave = be32_to_cpu(eqe->event.flr_event.slave_id);
			if (!mlx4_is_master(dev)) {
				mlx4_warn(dev, "Non-master function received FLR event\n");
				break;
			}

			mlx4_dbg(dev, "FLR event for slave: %d\n", flr_slave);

			if (flr_slave >= dev->num_slaves) {
				mlx4_warn(dev,
					  "Got FLR for unknown function: %d\n",
					  flr_slave);
				update_slave_state = 0;
			} else
				update_slave_state = 1;

			spin_lock_irqsave(&priv->mfunc.master.slave_state_lock, flags);
			if (update_slave_state) {
				priv->mfunc.master.slave_state[flr_slave].active = false;
				priv->mfunc.master.slave_state[flr_slave].last_cmd = MLX4_COMM_CMD_FLR;
				priv->mfunc.master.slave_state[flr_slave].is_slave_going_down = 1;
			}
			spin_unlock_irqrestore(&priv->mfunc.master.slave_state_lock, flags);
			mlx4_dispatch_event(dev, MLX4_DEV_EVENT_SLAVE_SHUTDOWN,
					    flr_slave);
			queue_work(priv->mfunc.master.comm_wq,
				   &priv->mfunc.master.slave_flr_event_work);
			break;

		case MLX4_EVENT_TYPE_FATAL_WARNING:
			if (eqe->subtype == MLX4_FATAL_WARNING_SUBTYPE_WARMING) {
				if (mlx4_is_master(dev))
					for (i = 0; i < dev->num_slaves; i++) {
						mlx4_dbg(dev, "%s: Sending MLX4_FATAL_WARNING_SUBTYPE_WARMING to slave: %d\n",
							 __func__, i);
						if (i == dev->caps.function)
							continue;
						mlx4_slave_event(dev, i, eqe);
					}
				mlx4_err(dev, "Temperature Threshold was reached! Threshold: %d celsius degrees; Current Temperature: %d\n",
					 be16_to_cpu(eqe->event.warming.warning_threshold),
					 be16_to_cpu(eqe->event.warming.current_temperature));
			} else
				mlx4_warn(dev, "Unhandled event FATAL WARNING (%02x), subtype %02x on EQ %d at index %u. owner=%x, nent=0x%x, slave=%x, ownership=%s\n",
					  eqe->type, eqe->subtype, eq->eqn,
					  eq->cons_index, eqe->owner, eq->nent,
					  eqe->slave_id,
					  !!(eqe->owner & 0x80) ^
					  !!(eq->cons_index & eq->nent) ? "HW" : "SW");

			break;

		case MLX4_EVENT_TYPE_PORT_MNG_CHG_EVENT:
			mlx4_dispatch_event(dev, MLX4_DEV_EVENT_PORT_MGMT_CHANGE,
					    (unsigned long) eqe);
			break;

		case MLX4_EVENT_TYPE_RECOVERABLE_ERROR_EVENT:
			switch (eqe->subtype) {
			case MLX4_RECOVERABLE_ERROR_EVENT_SUBTYPE_BAD_CABLE:
				mlx4_warn(dev, "Bad cable detected on port %u\n",
					  eqe->event.bad_cable.port);
				break;
			case MLX4_RECOVERABLE_ERROR_EVENT_SUBTYPE_UNSUPPORTED_CABLE:
				mlx4_warn(dev, "Unsupported cable detected\n");
				break;
			default:
				mlx4_dbg(dev,
					 "Unhandled recoverable error event detected: %02x(%02x) on EQ %d at index %u. owner=%x, nent=0x%x, ownership=%s\n",
					 eqe->type, eqe->subtype, eq->eqn,
					 eq->cons_index, eqe->owner, eq->nent,
					 !!(eqe->owner & 0x80) ^
					 !!(eq->cons_index & eq->nent) ? "HW" : "SW");
				break;
			}
			break;

		case MLX4_EVENT_TYPE_EEC_CATAS_ERROR:
		case MLX4_EVENT_TYPE_ECC_DETECT:
		default:
			mlx4_warn(dev, "Unhandled event %02x(%02x) on EQ %d at index %u. owner=%x, nent=0x%x, slave=%x, ownership=%s\n",
				  eqe->type, eqe->subtype, eq->eqn,
				  eq->cons_index, eqe->owner, eq->nent,
				  eqe->slave_id,
				  !!(eqe->owner & 0x80) ^
				  !!(eq->cons_index & eq->nent) ? "HW" : "SW");
			break;
		};

		++eq->cons_index;
		eqes_found = 1;
		++set_ci;

		/*
		 * The HCA will think the queue has overflowed if we
		 * don't tell it we've been processing events. We
		 * create our EQs with MLX4_NUM_SPARE_EQE extra
		 * entries, so we must update our consumer index at
		 * least that often.
		 */
		if (unlikely(set_ci >= MLX4_NUM_SPARE_EQE)) {
			eq_set_ci(eq, 0);
			set_ci = 0;
		}
	}

	eq_set_ci(eq, 1);

	/* cqn is 24bit wide but is initialized such that its higher bits
	 * are ones too. Thus, if we got any event, cqn's high bits should be off
	 * and we need to schedule the tasklet.
	 */
	if (!(cqn & ~0xffffff))
		tasklet_schedule(&eq->tasklet_ctx.task);

	return eqes_found;
}

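/* Legacy (INTx) interrupt handler: clear the interrupt and poll every EQ,
 * since any of them may have fired.
 */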
static irqreturn_t mlx4_interrupt(int irq, void *dev_ptr)
{
	struct mlx4_dev *dev = dev_ptr;
	struct mlx4_priv *priv = mlx4_priv(dev);
	int work = 0;
	int i;

	writel(priv->eq_table.clr_mask, priv->eq_table.clr_int);

	for (i = 0; i < dev->caps.num_comp_vectors + 1; ++i)
		work |= mlx4_eq_int(dev, &priv->eq_table.eq[i]);

	return IRQ_RETVAL(work);
}

static irqreturn_t mlx4_msi_x_interrupt(int irq, void *eq_ptr)
{
	struct mlx4_eq *eq = eq_ptr;
	struct mlx4_dev *dev = eq->dev;

	mlx4_eq_int(dev, eq);

	/* MSI-X vectors always belong to us */
	return IRQ_HANDLED;
}

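/* MAP_EQ command wrapper for SR-IOV: only the master's own request is passed
 * to firmware; for every caller the requested event-type-to-EQ mapping is
 * recorded so the master knows which slave EQ to forward async events to.
 */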
int mlx4_MAP_EQ_wrapper(struct mlx4_dev *dev, int slave,
			struct mlx4_vhcr *vhcr,
			struct mlx4_cmd_mailbox *inbox,
			struct mlx4_cmd_mailbox *outbox,
			struct mlx4_cmd_info *cmd)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_slave_event_eq_info *event_eq =
		priv->mfunc.master.slave_state[slave].event_eq;
	u32 in_modifier = vhcr->in_modifier;
	u32 eqn = in_modifier & 0x3FF;
	u64 in_param = vhcr->in_param;
	int err = 0;
	int i;

	if (slave == dev->caps.function)
		err = mlx4_cmd(dev, in_param, (in_modifier & 0x80000000) | eqn,
			       0, MLX4_CMD_MAP_EQ, MLX4_CMD_TIME_CLASS_B,
			       MLX4_CMD_NATIVE);
	if (!err)
		for (i = 0; i < MLX4_EVENT_TYPES_NUM; ++i)
			if (in_param & (1LL << i))
				event_eq[i].eqn = in_modifier >> 31 ? -1 : eqn;

	return err;
}

static int mlx4_MAP_EQ(struct mlx4_dev *dev, u64 event_mask, int unmap,
		       int eq_num)
{
	return mlx4_cmd(dev, event_mask, (unmap << 31) | eq_num,
			0, MLX4_CMD_MAP_EQ, MLX4_CMD_TIME_CLASS_B,
			MLX4_CMD_WRAPPED);
}

static int mlx4_SW2HW_EQ(struct mlx4_dev *dev, struct mlx4_cmd_mailbox *mailbox,
			 int eq_num)
{
	return mlx4_cmd(dev, mailbox->dma, eq_num, 0,
			MLX4_CMD_SW2HW_EQ, MLX4_CMD_TIME_CLASS_A,
			MLX4_CMD_WRAPPED);
}

static int mlx4_HW2SW_EQ(struct mlx4_dev *dev, int eq_num)
{
	return mlx4_cmd(dev, 0, eq_num, 1, MLX4_CMD_HW2SW_EQ,
			MLX4_CMD_TIME_CLASS_A, MLX4_CMD_WRAPPED);
}

static int mlx4_num_eq_uar(struct mlx4_dev *dev)
{
	/*
	 * Each UAR holds 4 EQ doorbells. To figure out how many UARs
	 * we need to map, take the difference of highest index and
	 * the lowest index we'll use and add 1.
	 */
	return (dev->caps.num_comp_vectors + 1 + dev->caps.reserved_eqs) / 4 -
		dev->caps.reserved_eqs / 4 + 1;
}

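/* Return the doorbell address for this EQ, lazily ioremap()ing the UAR page
 * that contains it on first use.
 */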
static void __iomem *mlx4_get_eq_uar(struct mlx4_dev *dev, struct mlx4_eq *eq)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	int index;

	index = eq->eqn / 4 - dev->caps.reserved_eqs / 4;

	if (!priv->eq_table.uar_map[index]) {
		priv->eq_table.uar_map[index] =
			ioremap(
				pci_resource_start(dev->persist->pdev, 2) +
				((eq->eqn / 4) << (dev->uar_page_shift)),
				(1 << (dev->uar_page_shift)));
		if (!priv->eq_table.uar_map[index]) {
			mlx4_err(dev, "Couldn't map EQ doorbell for EQN 0x%06x\n",
				 eq->eqn);
			return NULL;
		}
	}

	return priv->eq_table.uar_map[index] + 0x800 + 8 * (eq->eqn % 4);
}

static void mlx4_unmap_uar(struct mlx4_dev *dev)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	int i;

	for (i = 0; i < mlx4_num_eq_uar(dev); ++i)
		if (priv->eq_table.uar_map[i]) {
			iounmap(priv->eq_table.uar_map[i]);
			priv->eq_table.uar_map[i] = NULL;
		}
}

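/* Allocate and initialize one EQ: allocate the EQE buffer pages and MTT, map
 * the doorbell, hand the EQ context to firmware with SW2HW_EQ and set up the
 * per-EQ completion tasklet.
 */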
static int mlx4_create_eq(struct mlx4_dev *dev, int nent,
|
|
|
|
u8 intr, struct mlx4_eq *eq)
|
2007-05-09 08:00:38 +07:00
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
struct mlx4_cmd_mailbox *mailbox;
|
|
|
|
struct mlx4_eq_context *eq_context;
|
|
|
|
int npages;
|
|
|
|
u64 *dma_list = NULL;
|
|
|
|
dma_addr_t t;
|
|
|
|
u64 mtt_addr;
|
|
|
|
int err = -ENOMEM;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
eq->dev = dev;
|
|
|
|
eq->nent = roundup_pow_of_two(max(nent, 2));
|
2014-09-18 15:51:00 +07:00
|
|
|
/* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes, with
|
|
|
|
* strides of 64B,128B and 256B.
|
|
|
|
*/
|
|
|
|
npages = PAGE_ALIGN(eq->nent * dev->caps.eqe_size) / PAGE_SIZE;
|
2007-05-09 08:00:38 +07:00
|
|
|
|
|
|
|
eq->page_list = kmalloc(npages * sizeof *eq->page_list,
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!eq->page_list)
|
|
|
|
goto err_out;
|
|
|
|
|
|
|
|
for (i = 0; i < npages; ++i)
|
|
|
|
eq->page_list[i].buf = NULL;
|
|
|
|
|
|
|
|
dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
|
|
|
|
if (!dma_list)
|
|
|
|
goto err_out_free;
|
|
|
|
|
|
|
|
mailbox = mlx4_alloc_cmd_mailbox(dev);
|
|
|
|
if (IS_ERR(mailbox))
|
|
|
|
goto err_out_free;
|
|
|
|
eq_context = mailbox->buf;
|
|
|
|
|
|
|
|
for (i = 0; i < npages; ++i) {
|
2015-01-25 21:59:35 +07:00
|
|
|
eq->page_list[i].buf = dma_alloc_coherent(&dev->persist->
|
|
|
|
pdev->dev,
|
|
|
|
PAGE_SIZE, &t,
|
|
|
|
GFP_KERNEL);
|
2007-05-09 08:00:38 +07:00
|
|
|
if (!eq->page_list[i].buf)
|
|
|
|
goto err_out_free_pages;
|
|
|
|
|
|
|
|
dma_list[i] = t;
|
|
|
|
eq->page_list[i].map = t;
|
|
|
|
|
|
|
|
memset(eq->page_list[i].buf, 0, PAGE_SIZE);
|
|
|
|
}
|
|
|
|
|
|
|
|
eq->eqn = mlx4_bitmap_alloc(&priv->eq_table.bitmap);
|
|
|
|
if (eq->eqn == -1)
|
|
|
|
goto err_out_free_pages;
|
|
|
|
|
|
|
|
eq->doorbell = mlx4_get_eq_uar(dev, eq);
|
|
|
|
if (!eq->doorbell) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto err_out_free_eq;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = mlx4_mtt_init(dev, npages, PAGE_SHIFT, &eq->mtt);
|
|
|
|
if (err)
|
|
|
|
goto err_out_free_eq;
|
|
|
|
|
|
|
|
err = mlx4_write_mtt(dev, &eq->mtt, 0, npages, dma_list);
|
|
|
|
if (err)
|
|
|
|
goto err_out_free_mtt;
|
|
|
|
|
|
|
|
eq_context->flags = cpu_to_be32(MLX4_EQ_STATUS_OK |
|
|
|
|
MLX4_EQ_STATE_ARMED);
|
|
|
|
eq_context->log_eq_size = ilog2(eq->nent);
|
|
|
|
eq_context->intr = intr;
|
|
|
|
eq_context->log_page_size = PAGE_SHIFT - MLX4_ICM_PAGE_SHIFT;
|
|
|
|
|
|
|
|
mtt_addr = mlx4_mtt_addr(dev, &eq->mtt);
|
|
|
|
eq_context->mtt_base_addr_h = mtt_addr >> 32;
|
|
|
|
eq_context->mtt_base_addr_l = cpu_to_be32(mtt_addr & 0xffffffff);
|
|
|
|
|
|
|
|
err = mlx4_SW2HW_EQ(dev, mailbox, eq->eqn);
|
|
|
|
if (err) {
|
|
|
|
mlx4_warn(dev, "SW2HW_EQ failed (%d)\n", err);
|
|
|
|
goto err_out_free_mtt;
|
|
|
|
}
|
|
|
|
|
|
|
|
kfree(dma_list);
|
|
|
|
mlx4_free_cmd_mailbox(dev, mailbox);
|
|
|
|
|
|
|
|
eq->cons_index = 0;
|
|
|
|
|
net/mlx4_core: Use tasklet for user-space CQ completion events
Previously, we fired all our completion callbacks straight from our ISR.
Some of those callbacks were lightweight (for example, mlx4_en's and
IPoIB's napi callbacks), but some of them did more work (for example,
the user-space RDMA stack uverbs' completion handler). Besides that,
doing more than the minimal work in an ISR is generally considered wrong,
and it can even lead to a hard lockup of the system: when the hardware
generates a lot of completion events, the loop over those events can run
long enough that the system watchdog declares a hard lockup.
In order to avoid that, add a new way of invoking completion event
callbacks. In the interrupt itself, we add the CQs which received a
completion event to a per-EQ list and schedule a tasklet. In the tasklet
context we loop over all the CQs in the list and invoke the user callback.
(A hedged sketch of this queue-and-drain scheme follows this note.)
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-12-11 15:57:53 +07:00
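A hedged sketch of the scheme described above, matching the per-EQ list, lock, and tasklet initialized just below; the struct and field names here are illustrative, not the driver's exact definitions:

/*
 * Sketch only: ISR queues CQs on a per-EQ list, the tasklet drains that
 * list and runs the (possibly heavyweight) callbacks in softirq context.
 */
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct example_cq {
	struct list_head tasklet_list;		/* linkage on the per-EQ list */
	void (*comp)(struct example_cq *cq);	/* user completion callback */
};

struct example_eq_tasklet {
	struct list_head list;			/* CQs queued by the ISR */
	spinlock_t lock;
	struct tasklet_struct task;
};

/* ISR side: remember the CQ and defer the heavy work to the tasklet. */
static void example_isr_queue_cq(struct example_eq_tasklet *ctx,
				 struct example_cq *cq)
{
	unsigned long flags;

	spin_lock_irqsave(&ctx->lock, flags);
	if (list_empty(&cq->tasklet_list))
		list_add_tail(&cq->tasklet_list, &ctx->list);
	spin_unlock_irqrestore(&ctx->lock, flags);
	tasklet_schedule(&ctx->task);
}

/* Tasklet side: drain the list and invoke the callbacks outside the ISR. */
static void example_cq_tasklet_cb(unsigned long data)
{
	struct example_eq_tasklet *ctx = (struct example_eq_tasklet *)data;
	struct example_cq *cq, *tmp;
	unsigned long flags;
	LIST_HEAD(done);

	spin_lock_irqsave(&ctx->lock, flags);
	list_splice_init(&ctx->list, &done);
	spin_unlock_irqrestore(&ctx->lock, flags);

	list_for_each_entry_safe(cq, tmp, &done, tasklet_list) {
		list_del_init(&cq->tasklet_list);
		cq->comp(cq);
	}
}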
|
|
|
INIT_LIST_HEAD(&eq->tasklet_ctx.list);
|
|
|
|
INIT_LIST_HEAD(&eq->tasklet_ctx.process_list);
|
|
|
|
spin_lock_init(&eq->tasklet_ctx.lock);
|
|
|
|
tasklet_init(&eq->tasklet_ctx.task, mlx4_cq_tasklet_cb,
|
|
|
|
(unsigned long)&eq->tasklet_ctx);
|
|
|
|
|
2007-05-09 08:00:38 +07:00
|
|
|
return err;
|
|
|
|
|
|
|
|
err_out_free_mtt:
|
|
|
|
mlx4_mtt_cleanup(dev, &eq->mtt);
|
|
|
|
|
|
|
|
err_out_free_eq:
|
mlx4_core: Roll back round robin bitmap allocation commit for CQs, SRQs, and MPTs
Commit f4ec9e9 "mlx4_core: Change bitmap allocator to work in round-robin fashion"
introduced round-robin allocation (via bitmap) for all resources which allocate
via a bitmap.
Round-robin allocation is desirable for mcgs, counters, PDs, UARs, and xrcds.
These are simply numbers, with no involvement of ICM memory mapping.
Round-robin is required for QPs, since we had a problem with immediate
reuse of a 24-bit QP number (commit f4ec9e9).
However, for other resources which use the bitmap allocator and involve
mapping ICM memory -- MPTs, CQs, SRQs -- round-robin is not desirable.
What happens in these cases is the following:
ICM memory is allocated and mapped in chunks of 256K.
Since the resource allocation index goes up monotonically, the allocator
will eventually require mapping a new chunk. Now, chunks are also unmapped
when their reference count goes back to zero. Thus, if a single app is
running and starts/exits frequently, we will have the following situation:
when the app starts, a new chunk must be allocated and mapped;
when the app exits, the chunk reference count goes back to zero, and the
chunk is unmapped and freed. Therefore, the app must pay the cost of allocating
and mapping ICM memory each time it runs (although the price is paid only when
allocating the initial entry in the new chunk).
For apps which allocate MPTs/SRQs/CQs and which operate as described above,
this presented a performance problem.
We therefore roll back the round-robin allocator modification for MPTs, CQs, and SRQs.
(A small numeric illustration of this cost follows this note.)
Reported-by: Matthew Finlay <matt@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-12-08 21:50:17 +07:00
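A small standalone illustration of the chunk-mapping cost described above. Only the 256K chunk granularity comes from the note; the 64-byte per-object footprint is hypothetical, chosen just to make the arithmetic concrete:

#include <stdio.h>

int main(void)
{
	const unsigned long chunk_bytes = 256 * 1024;	/* ICM mapping granularity (from the note) */
	const unsigned long obj_bytes   = 64;		/* hypothetical per-object ICM footprint */
	const unsigned long objs_per_chunk = chunk_bytes / obj_bytes;

	/*
	 * A non-round-robin allocator that reuses freed low indices keeps a
	 * short-lived app inside an already mapped chunk; a round-robin index
	 * keeps advancing and crosses a chunk boundary every objs_per_chunk
	 * allocations, forcing a fresh ICM map (and a later unmap once the
	 * old chunk's reference count drops to zero).
	 */
	printf("objects per 256K ICM chunk: %lu\n", objs_per_chunk);
	return 0;
}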
|
|
|
mlx4_bitmap_free(&priv->eq_table.bitmap, eq->eqn, MLX4_USE_RR);
|
2007-05-09 08:00:38 +07:00
|
|
|
|
|
|
|
err_out_free_pages:
|
|
|
|
for (i = 0; i < npages; ++i)
|
|
|
|
if (eq->page_list[i].buf)
|
2015-01-25 21:59:35 +07:00
|
|
|
dma_free_coherent(&dev->persist->pdev->dev, PAGE_SIZE,
|
2007-05-09 08:00:38 +07:00
|
|
|
eq->page_list[i].buf,
|
|
|
|
eq->page_list[i].map);
|
|
|
|
|
|
|
|
mlx4_free_cmd_mailbox(dev, mailbox);
|
|
|
|
|
|
|
|
err_out_free:
|
|
|
|
kfree(eq->page_list);
|
|
|
|
kfree(dma_list);
|
|
|
|
|
|
|
|
err_out:
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_free_eq(struct mlx4_dev *dev,
|
|
|
|
struct mlx4_eq *eq)
|
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
int err;
|
|
|
|
int i;
|
2014-09-18 15:51:00 +07:00
|
|
|
/* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes, with
|
|
|
|
* strides of 64B, 128B, and 256B
|
|
|
|
*/
|
|
|
|
int npages = PAGE_ALIGN(dev->caps.eqe_size * eq->nent) / PAGE_SIZE;
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2015-01-27 20:58:03 +07:00
|
|
|
err = mlx4_HW2SW_EQ(dev, eq->eqn);
|
2007-05-09 08:00:38 +07:00
|
|
|
if (err)
|
|
|
|
mlx4_warn(dev, "HW2SW_EQ failed (%d)\n", err);
|
|
|
|
|
2014-10-23 19:57:27 +07:00
|
|
|
synchronize_irq(eq->irq);
|
2014-12-11 15:57:53 +07:00
|
|
|
tasklet_disable(&eq->tasklet_ctx.task);
|
2007-05-09 08:00:38 +07:00
|
|
|
|
|
|
|
mlx4_mtt_cleanup(dev, &eq->mtt);
|
|
|
|
for (i = 0; i < npages; ++i)
|
2015-01-25 21:59:35 +07:00
|
|
|
dma_free_coherent(&dev->persist->pdev->dev, PAGE_SIZE,
|
|
|
|
eq->page_list[i].buf,
|
|
|
|
eq->page_list[i].map);
|
2007-05-09 08:00:38 +07:00
|
|
|
|
|
|
|
kfree(eq->page_list);
|
2013-12-08 21:50:17 +07:00
|
|
|
mlx4_bitmap_free(&priv->eq_table.bitmap, eq->eqn, MLX4_USE_RR);
|
2007-05-09 08:00:38 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_free_irqs(struct mlx4_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx4_eq_table *eq_table = &mlx4_priv(dev)->eq_table;
|
2015-05-31 13:30:16 +07:00
|
|
|
int i;
|
2007-05-09 08:00:38 +07:00
|
|
|
|
|
|
|
if (eq_table->have_irq)
|
2015-01-25 21:59:35 +07:00
|
|
|
free_irq(dev->persist->pdev->irq, dev);
|
2011-03-23 05:37:47 +07:00
|
|
|
|
2008-12-22 22:15:03 +07:00
|
|
|
for (i = 0; i < dev->caps.num_comp_vectors + 1; ++i)
|
2009-06-15 03:30:45 +07:00
|
|
|
if (eq_table->eq[i].have_irq) {
|
2015-05-31 13:30:17 +07:00
|
|
|
free_cpumask_var(eq_table->eq[i].affinity_mask);
|
|
|
|
#if defined(CONFIG_SMP)
|
|
|
|
irq_set_affinity_hint(eq_table->eq[i].irq, NULL);
|
|
|
|
#endif
|
2007-05-09 08:00:38 +07:00
|
|
|
free_irq(eq_table->eq[i].irq, eq_table->eq + i);
|
2009-06-15 03:30:45 +07:00
|
|
|
eq_table->eq[i].have_irq = 0;
|
|
|
|
}
|
2008-12-22 22:15:03 +07:00
|
|
|
|
|
|
|
kfree(eq_table->irq_names);
|
2007-05-09 08:00:38 +07:00
|
|
|
}
|
|
|
|
|
2007-10-11 05:43:54 +07:00
|
|
|
static int mlx4_map_clr_int(struct mlx4_dev *dev)
|
2007-05-09 08:00:38 +07:00
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
|
2015-01-25 21:59:35 +07:00
|
|
|
priv->clr_base = ioremap(pci_resource_start(dev->persist->pdev,
|
|
|
|
priv->fw.clr_int_bar) +
|
2007-05-09 08:00:38 +07:00
|
|
|
priv->fw.clr_int_base, MLX4_CLR_INT_SIZE);
|
|
|
|
if (!priv->clr_base) {
|
2014-05-08 02:52:57 +07:00
|
|
|
mlx4_err(dev, "Couldn't map interrupt clear register, aborting\n");
|
2007-05-09 08:00:38 +07:00
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_unmap_clr_int(struct mlx4_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
|
|
|
|
iounmap(priv->clr_base);
|
|
|
|
}
|
|
|
|
|
2008-12-22 22:15:03 +07:00
|
|
|
int mlx4_alloc_eq_table(struct mlx4_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
|
|
|
|
priv->eq_table.eq = kcalloc(dev->caps.num_eqs - dev->caps.reserved_eqs,
|
|
|
|
sizeof *priv->eq_table.eq, GFP_KERNEL);
|
|
|
|
if (!priv->eq_table.eq)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
void mlx4_free_eq_table(struct mlx4_dev *dev)
|
|
|
|
{
|
|
|
|
kfree(mlx4_priv(dev)->eq_table.eq);
|
|
|
|
}
|
|
|
|
|
2007-10-11 05:43:54 +07:00
|
|
|
int mlx4_init_eq_table(struct mlx4_dev *dev)
|
2007-05-09 08:00:38 +07:00
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
int err;
|
|
|
|
int i;
|
|
|
|
|
2012-02-12 22:14:39 +07:00
|
|
|
priv->eq_table.uar_map = kcalloc(mlx4_num_eq_uar(dev),
|
|
|
|
sizeof *priv->eq_table.uar_map,
|
|
|
|
GFP_KERNEL);
|
2008-12-22 22:15:03 +07:00
|
|
|
if (!priv->eq_table.uar_map) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto err_out_free;
|
|
|
|
}
|
|
|
|
|
2014-11-13 19:45:32 +07:00
|
|
|
err = mlx4_bitmap_init(&priv->eq_table.bitmap,
|
|
|
|
roundup_pow_of_two(dev->caps.num_eqs),
|
|
|
|
dev->caps.num_eqs - 1,
|
|
|
|
dev->caps.reserved_eqs,
|
|
|
|
roundup_pow_of_two(dev->caps.num_eqs) -
|
|
|
|
dev->caps.num_eqs);
|
2007-05-09 08:00:38 +07:00
|
|
|
if (err)
|
2008-12-22 22:15:03 +07:00
|
|
|
goto err_out_free;
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2008-12-22 22:15:03 +07:00
|
|
|
for (i = 0; i < mlx4_num_eq_uar(dev); ++i)
|
2007-05-09 08:00:38 +07:00
|
|
|
priv->eq_table.uar_map[i] = NULL;
|
|
|
|
|
2011-12-13 11:13:58 +07:00
|
|
|
if (!mlx4_is_slave(dev)) {
|
|
|
|
err = mlx4_map_clr_int(dev);
|
|
|
|
if (err)
|
|
|
|
goto err_out_bitmap;
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2011-12-13 11:13:58 +07:00
|
|
|
priv->eq_table.clr_mask =
|
|
|
|
swab32(1 << (priv->eq_table.inta_pin & 31));
|
|
|
|
priv->eq_table.clr_int = priv->clr_base +
|
|
|
|
(priv->eq_table.inta_pin < 32 ? 4 : 0);
|
|
|
|
}
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2009-09-06 10:24:50 +07:00
|
|
|
priv->eq_table.irq_names =
|
2015-05-31 13:30:16 +07:00
|
|
|
kmalloc(MLX4_IRQNAME_SIZE * (dev->caps.num_comp_vectors + 1),
|
2009-09-06 10:24:50 +07:00
|
|
|
GFP_KERNEL);
|
2008-12-22 22:15:03 +07:00
|
|
|
if (!priv->eq_table.irq_names) {
|
|
|
|
err = -ENOMEM;
|
2015-05-31 13:30:16 +07:00
|
|
|
goto err_out_clr_int;
|
2008-12-22 22:15:03 +07:00
|
|
|
}
|
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
for (i = 0; i < dev->caps.num_comp_vectors + 1; ++i) {
|
|
|
|
if (i == MLX4_EQ_ASYNC) {
|
|
|
|
err = mlx4_create_eq(dev,
|
|
|
|
MLX4_NUM_ASYNC_EQE + MLX4_NUM_SPARE_EQE,
|
|
|
|
0, &priv->eq_table.eq[MLX4_EQ_ASYNC]);
|
|
|
|
} else {
|
|
|
|
struct mlx4_eq *eq = &priv->eq_table.eq[i];
|
2015-06-02 14:29:48 +07:00
|
|
|
#ifdef CONFIG_RFS_ACCEL
|
2015-05-31 13:30:16 +07:00
|
|
|
int port = find_first_bit(eq->actv_ports.ports,
|
|
|
|
dev->caps.num_ports) + 1;
|
|
|
|
|
|
|
|
if (port <= dev->caps.num_ports) {
|
|
|
|
struct mlx4_port_info *info =
|
|
|
|
&mlx4_priv(dev)->port[port];
|
|
|
|
|
|
|
|
if (!info->rmap) {
|
|
|
|
info->rmap = alloc_irq_cpu_rmap(
|
|
|
|
mlx4_get_eqs_per_port(dev, port));
|
|
|
|
if (!info->rmap) {
|
|
|
|
mlx4_warn(dev, "Failed to allocate cpu rmap\n");
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto err_out_unmap;
|
|
|
|
}
|
|
|
|
}
|
2011-03-23 05:37:47 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
err = irq_cpu_rmap_add(
|
|
|
|
info->rmap, eq->irq);
|
|
|
|
if (err)
|
|
|
|
mlx4_warn(dev, "Failed adding irq rmap\n");
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
err = mlx4_create_eq(dev, dev->caps.num_cqs -
|
|
|
|
dev->caps.reserved_cqs +
|
|
|
|
MLX4_NUM_SPARE_EQE,
|
|
|
|
(dev->flags & MLX4_FLAG_MSI_X) ?
|
|
|
|
i + 1 - !!(i > MLX4_EQ_ASYNC) : 0,
|
|
|
|
eq);
|
2011-03-23 05:37:47 +07:00
|
|
|
}
|
2015-05-31 13:30:16 +07:00
|
|
|
if (err)
|
|
|
|
goto err_out_unmap;
|
2011-03-23 05:37:47 +07:00
|
|
|
}
|
|
|
|
|
2007-05-09 08:00:38 +07:00
|
|
|
if (dev->flags & MLX4_FLAG_MSI_X) {
|
2008-12-22 22:15:03 +07:00
|
|
|
const char *eq_name;
|
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
snprintf(priv->eq_table.irq_names +
|
|
|
|
MLX4_EQ_ASYNC * MLX4_IRQNAME_SIZE,
|
|
|
|
MLX4_IRQNAME_SIZE,
|
|
|
|
"mlx4-async@pci:%s",
|
|
|
|
pci_name(dev->persist->pdev));
|
|
|
|
eq_name = priv->eq_table.irq_names +
|
|
|
|
MLX4_EQ_ASYNC * MLX4_IRQNAME_SIZE;
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
err = request_irq(priv->eq_table.eq[MLX4_EQ_ASYNC].irq,
|
|
|
|
mlx4_msi_x_interrupt, 0, eq_name,
|
|
|
|
priv->eq_table.eq + MLX4_EQ_ASYNC);
|
|
|
|
if (err)
|
|
|
|
goto err_out_unmap;
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
priv->eq_table.eq[MLX4_EQ_ASYNC].have_irq = 1;
|
2007-05-09 08:00:38 +07:00
|
|
|
} else {
|
2009-09-06 10:24:50 +07:00
|
|
|
snprintf(priv->eq_table.irq_names,
|
|
|
|
MLX4_IRQNAME_SIZE,
|
|
|
|
DRV_NAME "@pci:%s",
|
2015-01-25 21:59:35 +07:00
|
|
|
pci_name(dev->persist->pdev));
|
|
|
|
err = request_irq(dev->persist->pdev->irq, mlx4_interrupt,
|
2009-09-06 10:24:50 +07:00
|
|
|
IRQF_SHARED, priv->eq_table.irq_names, dev);
|
2007-05-09 08:00:38 +07:00
|
|
|
if (err)
|
2015-05-31 13:30:16 +07:00
|
|
|
goto err_out_unmap;
|
2007-05-09 08:00:38 +07:00
|
|
|
|
|
|
|
priv->eq_table.have_irq = 1;
|
|
|
|
}
|
|
|
|
|
mlx4: Use port management change event instead of smp_snoop
The port management change event can replace smp_snoop. If the
capability bit for this event is set in the device caps, the event is used
(by the driver setting the PORT_MNG_CHG_EVENT bit in the async event
mask in the MAP_EQ fw command). In this case, when the driver passes
incoming SMP PORT_INFO SET mads to the FW, the FW generates port
management change events to signal any changes to the driver.
If the FW generates these events, smp_snoop shouldn't be invoked in
ib_process_mad(), or duplicate events will occur (once from the
FW-generated event, and once from smp_snoop).
In the case where the FW does not generate port management change
events, smp_snoop needs to be invoked to create these events. The flow
in smp_snoop has been modified to make use of the same procedures as
in the FW-generated-event case to generate the port management
events (LID change, Client-rereg, Pkey change, and/or GID change).
Port management change event handling required changing the
mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument
(last argument) had to be changed to unsigned long in order to
accommodate passing the EQE pointer.
We also needed to move the definition of struct mlx4_eqe from
net/mlx4.h to file device.h -- to make it available to the IB driver,
to handle port management change events.
(A hedged sketch of the changed dispatch call follows this note.)
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-06-19 15:21:40 +07:00
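A hedged sketch of the prototype change described above: the last argument becomes an unsigned long so an EQE pointer can travel down the same dispatch path as a plain number. The wrapper function below is hypothetical; the event names follow the mlx4 convention but are assumptions, and mlx4_dispatch_event() itself is declared in the driver's private mlx4.h, not the public header:

/*
 * Illustrative only: forwards a port-management-change EQE through
 * mlx4_dispatch_event() by casting the pointer to unsigned long, which is
 * exactly what the widened "param" argument makes room for.
 */
#include <linux/mlx4/device.h>

static void example_forward_pmc_eqe(struct mlx4_dev *dev, struct mlx4_eqe *eqe)
{
	/* A plain numeric parameter (e.g. a port number) ... */
	mlx4_dispatch_event(dev, MLX4_DEV_EVENT_PORT_UP, 1);

	/* ... and a pointer parameter, passed as unsigned long. */
	mlx4_dispatch_event(dev, MLX4_DEV_EVENT_PORT_MGMT_CHANGE,
			    (unsigned long)eqe);
}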
|
|
|
err = mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 0,
|
2015-05-31 13:30:16 +07:00
|
|
|
priv->eq_table.eq[MLX4_EQ_ASYNC].eqn);
|
2007-05-09 08:00:38 +07:00
|
|
|
if (err)
|
|
|
|
mlx4_warn(dev, "MAP_EQ for async EQ %d failed (%d)\n",
|
2015-05-31 13:30:16 +07:00
|
|
|
priv->eq_table.eq[MLX4_EQ_ASYNC].eqn, err);
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
/* arm ASYNC eq */
|
|
|
|
eq_set_ci(&priv->eq_table.eq[MLX4_EQ_ASYNC], 1);
|
2007-05-09 08:00:38 +07:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
err_out_unmap:
|
2016-09-14 18:09:24 +07:00
|
|
|
while (i > 0)
|
|
|
|
mlx4_free_eq(dev, &priv->eq_table.eq[--i]);
|
2015-05-31 13:30:16 +07:00
|
|
|
#ifdef CONFIG_RFS_ACCEL
|
|
|
|
for (i = 1; i <= dev->caps.num_ports; i++) {
|
|
|
|
if (mlx4_priv(dev)->port[i].rmap) {
|
|
|
|
free_irq_cpu_rmap(mlx4_priv(dev)->port[i].rmap);
|
|
|
|
mlx4_priv(dev)->port[i].rmap = NULL;
|
|
|
|
}
|
2008-12-22 22:15:03 +07:00
|
|
|
}
|
2015-05-31 13:30:16 +07:00
|
|
|
#endif
|
|
|
|
mlx4_free_irqs(dev);
|
|
|
|
|
|
|
|
err_out_clr_int:
|
2011-12-13 11:13:58 +07:00
|
|
|
if (!mlx4_is_slave(dev))
|
|
|
|
mlx4_unmap_clr_int(dev);
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2008-12-22 22:15:03 +07:00
|
|
|
err_out_bitmap:
|
2012-10-25 08:12:49 +07:00
|
|
|
mlx4_unmap_uar(dev);
|
2007-05-09 08:00:38 +07:00
|
|
|
mlx4_bitmap_cleanup(&priv->eq_table.bitmap);
|
2008-12-22 22:15:03 +07:00
|
|
|
|
|
|
|
err_out_free:
|
|
|
|
kfree(priv->eq_table.uar_map);
|
|
|
|
|
2007-05-09 08:00:38 +07:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
void mlx4_cleanup_eq_table(struct mlx4_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
int i;
|
|
|
|
|
2012-06-19 15:21:40 +07:00
|
|
|
mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 1,
|
2015-05-31 13:30:16 +07:00
|
|
|
priv->eq_table.eq[MLX4_EQ_ASYNC].eqn);
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
#ifdef CONFIG_RFS_ACCEL
|
|
|
|
for (i = 1; i <= dev->caps.num_ports; i++) {
|
|
|
|
if (mlx4_priv(dev)->port[i].rmap) {
|
|
|
|
free_irq_cpu_rmap(mlx4_priv(dev)->port[i].rmap);
|
|
|
|
mlx4_priv(dev)->port[i].rmap = NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
2007-05-09 08:00:38 +07:00
|
|
|
mlx4_free_irqs(dev);
|
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
for (i = 0; i < dev->caps.num_comp_vectors + 1; ++i)
|
2007-05-09 08:00:38 +07:00
|
|
|
mlx4_free_eq(dev, &priv->eq_table.eq[i]);
|
|
|
|
|
2011-12-13 11:13:58 +07:00
|
|
|
if (!mlx4_is_slave(dev))
|
|
|
|
mlx4_unmap_clr_int(dev);
|
2007-05-09 08:00:38 +07:00
|
|
|
|
2012-10-25 08:12:49 +07:00
|
|
|
mlx4_unmap_uar(dev);
|
2007-05-09 08:00:38 +07:00
|
|
|
mlx4_bitmap_cleanup(&priv->eq_table.bitmap);
|
2008-12-22 22:15:03 +07:00
|
|
|
|
|
|
|
kfree(priv->eq_table.uar_map);
|
2007-05-09 08:00:38 +07:00
|
|
|
}
|
2010-08-24 10:46:18 +07:00
|
|
|
|
|
|
|
/* A test that verifies that we can accept interrupts on all
|
|
|
|
* the irq vectors of the device.
|
|
|
|
* Interrupts are checked using the NOP command.
|
|
|
|
*/
|
|
|
|
int mlx4_test_interrupts(struct mlx4_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
int i;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
err = mlx4_NOP(dev);
|
|
|
|
/* When not in MSI_X, there is only one irq to check */
|
2011-12-13 11:13:58 +07:00
|
|
|
if (!(dev->flags & MLX4_FLAG_MSI_X) || mlx4_is_slave(dev))
|
2010-08-24 10:46:18 +07:00
|
|
|
return err;
|
|
|
|
|
|
|
|
/* Loop over all completion vectors; for each vector we check
|
|
|
|
* whether it works by mapping command completions to that vector
|
|
|
|
* and performing a NOP command
|
|
|
|
*/
|
|
|
|
for (i = 0; !err && (i < dev->caps.num_comp_vectors); ++i) {
|
2015-10-08 19:26:15 +07:00
|
|
|
/* Make sure request_irq was called */
|
|
|
|
if (!priv->eq_table.eq[i].have_irq)
|
|
|
|
continue;
|
|
|
|
|
2010-08-24 10:46:18 +07:00
|
|
|
/* Temporarily use polling for command completions */
|
|
|
|
mlx4_cmd_use_polling(dev);
|
|
|
|
|
2012-09-20 08:48:02 +07:00
|
|
|
/* Map the new eq to handle all asynchronous events */
|
2012-06-19 15:21:40 +07:00
|
|
|
err = mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 0,
|
2010-08-24 10:46:18 +07:00
|
|
|
priv->eq_table.eq[i].eqn);
|
|
|
|
if (err) {
|
|
|
|
mlx4_warn(dev, "Failed mapping eq for interrupt test\n");
|
|
|
|
mlx4_cmd_use_events(dev);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Go back to using events */
|
|
|
|
mlx4_cmd_use_events(dev);
|
|
|
|
err = mlx4_NOP(dev);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Return to default */
|
2012-06-19 15:21:40 +07:00
|
|
|
mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 0,
|
2015-05-31 13:30:16 +07:00
|
|
|
priv->eq_table.eq[MLX4_EQ_ASYNC].eqn);
|
2010-08-24 10:46:18 +07:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(mlx4_test_interrupts);
|
2011-03-23 05:37:47 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
bool mlx4_is_eq_vector_valid(struct mlx4_dev *dev, u8 port, int vector)
|
2011-03-23 05:37:47 +07:00
|
|
|
{
|
2015-05-31 13:30:16 +07:00
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
2011-03-23 05:37:47 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
vector = MLX4_CQ_TO_EQ_VECTOR(vector);
|
|
|
|
if (vector < 0 || (vector >= dev->caps.num_comp_vectors + 1) ||
|
|
|
|
(vector == MLX4_EQ_ASYNC))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return test_bit(port - 1, priv->eq_table.eq[vector].actv_ports.ports);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(mlx4_is_eq_vector_valid);
|
|
|
|
|
|
|
|
u32 mlx4_get_eqs_per_port(struct mlx4_dev *dev, u8 port)
|
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
unsigned int i;
|
|
|
|
unsigned int sum = 0;
|
|
|
|
|
|
|
|
for (i = 0; i < dev->caps.num_comp_vectors + 1; i++)
|
|
|
|
sum += !!test_bit(port - 1,
|
|
|
|
priv->eq_table.eq[i].actv_ports.ports);
|
|
|
|
|
|
|
|
return sum;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(mlx4_get_eqs_per_port);
|
|
|
|
|
|
|
|
int mlx4_is_eq_shared(struct mlx4_dev *dev, int vector)
|
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
|
|
|
|
vector = MLX4_CQ_TO_EQ_VECTOR(vector);
|
|
|
|
if (vector <= 0 || (vector >= dev->caps.num_comp_vectors + 1))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
return !!(bitmap_weight(priv->eq_table.eq[vector].actv_ports.ports,
|
|
|
|
dev->caps.num_ports) > 1);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(mlx4_is_eq_shared);
|
|
|
|
|
|
|
|
struct cpu_rmap *mlx4_get_cpu_rmap(struct mlx4_dev *dev, int port)
|
|
|
|
{
|
|
|
|
return mlx4_priv(dev)->port[port].rmap;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(mlx4_get_cpu_rmap);
|
|
|
|
|
|
|
|
int mlx4_assign_eq(struct mlx4_dev *dev, u8 port, int *vector)
|
|
|
|
{
|
2011-03-23 05:37:47 +07:00
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
2015-05-31 13:30:16 +07:00
|
|
|
int err = 0, i = 0;
|
|
|
|
u32 min_ref_count_val = (u32)-1;
|
|
|
|
int requested_vector = MLX4_CQ_TO_EQ_VECTOR(*vector);
|
|
|
|
int *prequested_vector = NULL;
|
|
|
|
|
2011-03-23 05:37:47 +07:00
|
|
|
|
2012-02-21 10:39:32 +07:00
|
|
|
mutex_lock(&priv->msix_ctl.pool_lock);
|
2015-05-31 13:30:16 +07:00
|
|
|
if (requested_vector < (dev->caps.num_comp_vectors + 1) &&
|
|
|
|
(requested_vector >= 0) &&
|
|
|
|
(requested_vector != MLX4_EQ_ASYNC)) {
|
|
|
|
if (test_bit(port - 1,
|
|
|
|
priv->eq_table.eq[requested_vector].actv_ports.ports)) {
|
|
|
|
prequested_vector = &requested_vector;
|
|
|
|
} else {
|
|
|
|
struct mlx4_eq *eq;
|
|
|
|
|
|
|
|
for (i = 1; i < port;
|
|
|
|
requested_vector += mlx4_get_eqs_per_port(dev, i++))
|
|
|
|
;
|
|
|
|
|
|
|
|
eq = &priv->eq_table.eq[requested_vector];
|
|
|
|
if (requested_vector < dev->caps.num_comp_vectors + 1 &&
|
|
|
|
test_bit(port - 1, eq->actv_ports.ports)) {
|
|
|
|
prequested_vector = &requested_vector;
|
2012-07-19 05:33:51 +07:00
|
|
|
}
|
2015-05-31 13:30:16 +07:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!prequested_vector) {
|
|
|
|
requested_vector = -1;
|
|
|
|
for (i = 0; min_ref_count_val && i < dev->caps.num_comp_vectors + 1;
|
|
|
|
i++) {
|
|
|
|
struct mlx4_eq *eq = &priv->eq_table.eq[i];
|
|
|
|
|
|
|
|
if (min_ref_count_val > eq->ref_count &&
|
|
|
|
test_bit(port - 1, eq->actv_ports.ports)) {
|
|
|
|
min_ref_count_val = eq->ref_count;
|
|
|
|
requested_vector = i;
|
2011-03-23 05:37:47 +07:00
|
|
|
}
|
2015-05-31 13:30:16 +07:00
|
|
|
}
|
2014-05-14 16:15:10 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
if (requested_vector < 0) {
|
|
|
|
err = -ENOSPC;
|
|
|
|
goto err_unlock;
|
2011-03-23 05:37:47 +07:00
|
|
|
}
|
2015-05-31 13:30:16 +07:00
|
|
|
|
|
|
|
prequested_vector = &requested_vector;
|
2011-03-23 05:37:47 +07:00
|
|
|
}
|
2015-05-31 13:30:16 +07:00
|
|
|
|
|
|
|
if (!test_bit(*prequested_vector, priv->msix_ctl.pool_bm) &&
|
|
|
|
dev->flags & MLX4_FLAG_MSI_X) {
|
|
|
|
set_bit(*prequested_vector, priv->msix_ctl.pool_bm);
|
|
|
|
snprintf(priv->eq_table.irq_names +
|
|
|
|
*prequested_vector * MLX4_IRQNAME_SIZE,
|
|
|
|
MLX4_IRQNAME_SIZE, "mlx4-%d@%s",
|
|
|
|
*prequested_vector, dev_name(&dev->persist->pdev->dev));
|
|
|
|
|
|
|
|
err = request_irq(priv->eq_table.eq[*prequested_vector].irq,
|
|
|
|
mlx4_msi_x_interrupt, 0,
|
|
|
|
&priv->eq_table.irq_names[*prequested_vector << 5],
|
|
|
|
priv->eq_table.eq + *prequested_vector);
|
|
|
|
|
|
|
|
if (err) {
|
|
|
|
clear_bit(*prequested_vector, priv->msix_ctl.pool_bm);
|
|
|
|
*prequested_vector = -1;
|
|
|
|
} else {
|
2015-05-31 13:30:17 +07:00
|
|
|
#if defined(CONFIG_SMP)
|
|
|
|
mlx4_set_eq_affinity_hint(priv, *prequested_vector);
|
|
|
|
#endif
|
2015-05-31 13:30:16 +07:00
|
|
|
eq_set_ci(&priv->eq_table.eq[*prequested_vector], 1);
|
|
|
|
priv->eq_table.eq[*prequested_vector].have_irq = 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!err && *prequested_vector >= 0)
|
|
|
|
priv->eq_table.eq[*prequested_vector].ref_count++;
|
|
|
|
|
|
|
|
err_unlock:
|
2012-02-21 10:39:32 +07:00
|
|
|
mutex_unlock(&priv->msix_ctl.pool_lock);
|
2011-03-23 05:37:47 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
if (!err && *prequested_vector >= 0)
|
|
|
|
*vector = MLX4_EQ_TO_CQ_VECTOR(*prequested_vector);
|
|
|
|
else
|
2011-03-23 05:37:47 +07:00
|
|
|
*vector = 0;
|
2015-05-31 13:30:16 +07:00
|
|
|
|
2011-03-23 05:37:47 +07:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(mlx4_assign_eq);
|
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
int mlx4_eq_get_irq(struct mlx4_dev *dev, int cq_vec)
|
2014-06-29 15:54:55 +07:00
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
return priv->eq_table.eq[MLX4_CQ_TO_EQ_VECTOR(cq_vec)].irq;
|
2014-06-29 15:54:55 +07:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(mlx4_eq_get_irq);
|
|
|
|
|
2011-03-23 05:37:47 +07:00
|
|
|
void mlx4_release_eq(struct mlx4_dev *dev, int vec)
|
|
|
|
{
|
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
2015-05-31 13:30:16 +07:00
|
|
|
int eq_vec = MLX4_CQ_TO_EQ_VECTOR(vec);
|
|
|
|
|
|
|
|
mutex_lock(&priv->msix_ctl.pool_lock);
|
|
|
|
priv->eq_table.eq[eq_vec].ref_count--;
|
2011-03-23 05:37:47 +07:00
|
|
|
|
2015-05-31 13:30:16 +07:00
|
|
|
/* once an EQ is allocated, we don't release it because it might be bound
|
|
|
|
* to cpu_rmap.
|
|
|
|
*/
|
|
|
|
mutex_unlock(&priv->msix_ctl.pool_lock);
|
2011-03-23 05:37:47 +07:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(mlx4_release_eq);
|
|
|
|
|