/*
 * Copyright (c) 2004 Topspin Communications. All rights reserved.
 * Copyright (c) 2005, 2006, 2007 Cisco Systems, Inc. All rights reserved.
 * Copyright (c) 2005, 2006, 2007, 2008 Mellanox Technologies. All rights reserved.
 * Copyright (c) 2004 Voltaire, Inc. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses. You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 *     Redistribution and use in source and binary forms, with or
 *     without modification, are permitted provided that the following
 *     conditions are met:
 *
 *      - Redistributions of source code must retain the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer.
 *
 *      - Redistributions in binary form must reproduce the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer in the documentation and/or other materials
 *        provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <linux/gfp.h>
#include <linux/export.h>

#include <linux/mlx4/cmd.h>
#include <linux/mlx4/qp.h>

#include "mlx4.h"
#include "icm.h"

/* QP to support BF should have bits 6,7 cleared */
#define MLX4_BF_QP_SKIP_MASK	0xc0
#define MLX4_MAX_BF_QP_RANGE	0x40
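The two defines above encode the Blue-Flame constraint: a QPN can drive BF doorbells only if bits 6 and 7 are clear, so at most 0x40 consecutive QPNs from a 256-aligned base qualify. A minimal standalone sketch of that eligibility test (the helper name is ours, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

#define MLX4_BF_QP_SKIP_MASK	0xc0	/* bits 6,7 must be clear for BF */
#define MLX4_MAX_BF_QP_RANGE	0x40	/* at most 64 consecutive BF QPNs */

/* Hypothetical helper: nonzero if this QPN may be used with Blue-Flame. */
static int qpn_bf_eligible(uint32_t qpn)
{
	return (qpn & MLX4_BF_QP_SKIP_MASK) == 0;
}
```

This is why a 256-aligned range of Tx QPs stops being BF-usable after the 64th entry: QPN base + 0x40 has bit 6 set.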

void mlx4_qp_event(struct mlx4_dev *dev, u32 qpn, int event_type)
{
	struct mlx4_qp_table *qp_table = &mlx4_priv(dev)->qp_table;
	struct mlx4_qp *qp;

	spin_lock(&qp_table->lock);

	qp = __mlx4_qp_lookup(dev, qpn);
	if (qp)
		refcount_inc(&qp->refcount);

	spin_unlock(&qp_table->lock);

	if (!qp) {
		mlx4_dbg(dev, "Async event for nonexistent QP %08x\n", qpn);
		return;
	}

	qp->event(qp, event_type);

	if (refcount_dec_and_test(&qp->refcount))
		complete(&qp->free);
}
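mlx4_qp_event() follows the usual lookup pattern: take a reference under the table lock, drop the lock, deliver the event, then release the reference; whoever drops the last reference signals the QP's destroyer via complete(). A single-threaded toy model of just the counting (no locking shown; the names are ours, not the driver's):

```c
#include <assert.h>

struct toy_qp {
	int refcount;	/* starts at 1, held by the owner */
	int freed;	/* set when the last reference is dropped */
};

static void toy_get(struct toy_qp *qp)
{
	qp->refcount++;
}

static void toy_put(struct toy_qp *qp)
{
	if (--qp->refcount == 0)
		qp->freed = 1;	/* stands in for complete(&qp->free) */
}
```

The point of the pattern: the event handler can safely touch the QP after dropping the lock, because destruction blocks until the count returns to zero.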

/* used for INIT/CLOSE port logic */
static int is_master_qp0(struct mlx4_dev *dev, struct mlx4_qp *qp,
			 int *real_qp0, int *proxy_qp0)
{
	/* this procedure is called after we already know we are on the master */
	/* qp0 is either the proxy qp0, or the real qp0 */
	u32 pf_proxy_offset = dev->phys_caps.base_proxy_sqpn +
			      8 * mlx4_master_func_num(dev);

	*proxy_qp0 = qp->qpn >= pf_proxy_offset &&
		     qp->qpn <= pf_proxy_offset + 1;

	*real_qp0 = qp->qpn >= dev->phys_caps.base_sqpn &&
		    qp->qpn <= dev->phys_caps.base_sqpn + 1;

	return *real_qp0 || *proxy_qp0;
}
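is_master_qp0() reduces to two range checks: each function owns a block of eight proxy special QPNs starting at base_proxy_sqpn, and the first two entries of the master's block are its proxy qp0 pair. A standalone sketch of the same arithmetic (the function and parameter names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Is qpn one of the master's two proxy-qp0 QPNs, given the first proxy
 * special QPN and the master's function number?  Mirrors
 * pf_proxy_offset = base_proxy_sqpn + 8 * mlx4_master_func_num(dev). */
static int is_proxy_qp0(uint32_t qpn, uint32_t base_proxy_sqpn,
			uint32_t master_func_num)
{
	uint32_t pf_proxy_offset = base_proxy_sqpn + 8 * master_func_num;

	return qpn >= pf_proxy_offset && qpn <= pf_proxy_offset + 1;
}
```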

static int __mlx4_qp_modify(struct mlx4_dev *dev, struct mlx4_mtt *mtt,
			    enum mlx4_qp_state cur_state, enum mlx4_qp_state new_state,
			    struct mlx4_qp_context *context,
			    enum mlx4_qp_optpar optpar,
			    int sqd_event, struct mlx4_qp *qp, int native)
{
	static const u16 op[MLX4_QP_NUM_STATE][MLX4_QP_NUM_STATE] = {
		[MLX4_QP_STATE_RST] = {
			[MLX4_QP_STATE_RST]	= MLX4_CMD_2RST_QP,
			[MLX4_QP_STATE_ERR]	= MLX4_CMD_2ERR_QP,
			[MLX4_QP_STATE_INIT]	= MLX4_CMD_RST2INIT_QP,
		},
		[MLX4_QP_STATE_INIT] = {
			[MLX4_QP_STATE_RST]	= MLX4_CMD_2RST_QP,
			[MLX4_QP_STATE_ERR]	= MLX4_CMD_2ERR_QP,
			[MLX4_QP_STATE_INIT]	= MLX4_CMD_INIT2INIT_QP,
			[MLX4_QP_STATE_RTR]	= MLX4_CMD_INIT2RTR_QP,
		},
		[MLX4_QP_STATE_RTR] = {
			[MLX4_QP_STATE_RST]	= MLX4_CMD_2RST_QP,
			[MLX4_QP_STATE_ERR]	= MLX4_CMD_2ERR_QP,
			[MLX4_QP_STATE_RTS]	= MLX4_CMD_RTR2RTS_QP,
		},
		[MLX4_QP_STATE_RTS] = {
			[MLX4_QP_STATE_RST]	= MLX4_CMD_2RST_QP,
			[MLX4_QP_STATE_ERR]	= MLX4_CMD_2ERR_QP,
			[MLX4_QP_STATE_RTS]	= MLX4_CMD_RTS2RTS_QP,
			[MLX4_QP_STATE_SQD]	= MLX4_CMD_RTS2SQD_QP,
		},
		[MLX4_QP_STATE_SQD] = {
			[MLX4_QP_STATE_RST]	= MLX4_CMD_2RST_QP,
			[MLX4_QP_STATE_ERR]	= MLX4_CMD_2ERR_QP,
			[MLX4_QP_STATE_RTS]	= MLX4_CMD_SQD2RTS_QP,
			[MLX4_QP_STATE_SQD]	= MLX4_CMD_SQD2SQD_QP,
		},
		[MLX4_QP_STATE_SQER] = {
			[MLX4_QP_STATE_RST]	= MLX4_CMD_2RST_QP,
			[MLX4_QP_STATE_ERR]	= MLX4_CMD_2ERR_QP,
			[MLX4_QP_STATE_RTS]	= MLX4_CMD_SQERR2RTS_QP,
		},
		[MLX4_QP_STATE_ERR] = {
			[MLX4_QP_STATE_RST]	= MLX4_CMD_2RST_QP,
			[MLX4_QP_STATE_ERR]	= MLX4_CMD_2ERR_QP,
		}
	};

	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_cmd_mailbox *mailbox;
	int ret = 0;
	int real_qp0 = 0;
	int proxy_qp0 = 0;
	u8 port;

	if (cur_state >= MLX4_QP_NUM_STATE || new_state >= MLX4_QP_NUM_STATE ||
	    !op[cur_state][new_state])
		return -EINVAL;

	if (op[cur_state][new_state] == MLX4_CMD_2RST_QP) {
		ret = mlx4_cmd(dev, 0, qp->qpn, 2,
			       MLX4_CMD_2RST_QP, MLX4_CMD_TIME_CLASS_A, native);
		if (mlx4_is_master(dev) && cur_state != MLX4_QP_STATE_ERR &&
		    cur_state != MLX4_QP_STATE_RST &&
		    is_master_qp0(dev, qp, &real_qp0, &proxy_qp0)) {
			port = (qp->qpn & 1) + 1;
			if (proxy_qp0)
				priv->mfunc.master.qp0_state[port].proxy_qp0_active = 0;
			else
				priv->mfunc.master.qp0_state[port].qp0_active = 0;
		}
		return ret;
	}

	mailbox = mlx4_alloc_cmd_mailbox(dev);
	if (IS_ERR(mailbox))
		return PTR_ERR(mailbox);

	if (cur_state == MLX4_QP_STATE_RST && new_state == MLX4_QP_STATE_INIT) {
		u64 mtt_addr = mlx4_mtt_addr(dev, mtt);
		context->mtt_base_addr_h = mtt_addr >> 32;
		context->mtt_base_addr_l = cpu_to_be32(mtt_addr & 0xffffffff);
		context->log_page_size   = mtt->page_shift - MLX4_ICM_PAGE_SHIFT;
	}

	if ((cur_state == MLX4_QP_STATE_RTR) &&
	    (new_state == MLX4_QP_STATE_RTS) &&
	    dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_ROCE_V1_V2)
		context->roce_entropy =
			cpu_to_be16(mlx4_qp_roce_entropy(dev, qp->qpn));

	*(__be32 *) mailbox->buf = cpu_to_be32(optpar);
	memcpy(mailbox->buf + 8, context, sizeof(*context));

	((struct mlx4_qp_context *) (mailbox->buf + 8))->local_qpn =
		cpu_to_be32(qp->qpn);

	ret = mlx4_cmd(dev, mailbox->dma,
		       qp->qpn | (!!sqd_event << 31),
		       new_state == MLX4_QP_STATE_RST ? 2 : 0,
		       op[cur_state][new_state], MLX4_CMD_TIME_CLASS_C, native);

	if (mlx4_is_master(dev) && is_master_qp0(dev, qp, &real_qp0, &proxy_qp0)) {
		port = (qp->qpn & 1) + 1;
		if (cur_state != MLX4_QP_STATE_ERR &&
		    cur_state != MLX4_QP_STATE_RST &&
		    new_state == MLX4_QP_STATE_ERR) {
			if (proxy_qp0)
				priv->mfunc.master.qp0_state[port].proxy_qp0_active = 0;
			else
				priv->mfunc.master.qp0_state[port].qp0_active = 0;
		} else if (new_state == MLX4_QP_STATE_RTR) {
			if (proxy_qp0)
				priv->mfunc.master.qp0_state[port].proxy_qp0_active = 1;
			else
				priv->mfunc.master.qp0_state[port].qp0_active = 1;
		}
	}

	mlx4_free_cmd_mailbox(dev, mailbox);
	return ret;
}
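The static op[][] table above doubles as the transition validator: any state pair whose entry is zero is rejected with -EINVAL before a firmware command is ever built. A tiny userspace model of that lookup (the state names and opcode values here are made up for illustration, not the driver's):

```c
#include <assert.h>

enum { ST_RST, ST_INIT, ST_RTR, NUM_STATE };

/* Toy opcode table: a zero entry means "no such transition",
 * exactly like the driver's op[cur_state][new_state]. */
static const unsigned short toy_op[NUM_STATE][NUM_STATE] = {
	[ST_RST]  = { [ST_INIT] = 0x19 },	/* like RST2INIT */
	[ST_INIT] = { [ST_RTR]  = 0x1a },	/* like INIT2RTR */
};

/* Return the opcode, or -1 (standing in for -EINVAL) if invalid. */
static int lookup_transition(int cur, int next)
{
	if (cur >= NUM_STATE || next >= NUM_STATE || !toy_op[cur][next])
		return -1;
	return toy_op[cur][next];
}
```

Using designated initializers means unlisted transitions default to zero, so the table is both the opcode map and the legality check in one structure.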

int mlx4_qp_modify(struct mlx4_dev *dev, struct mlx4_mtt *mtt,
		   enum mlx4_qp_state cur_state, enum mlx4_qp_state new_state,
		   struct mlx4_qp_context *context,
		   enum mlx4_qp_optpar optpar,
		   int sqd_event, struct mlx4_qp *qp)
{
	return __mlx4_qp_modify(dev, mtt, cur_state, new_state, context,
				optpar, sqd_event, qp, 0);
}
EXPORT_SYMBOL_GPL(mlx4_qp_modify);
|
|
|
|
|
mlx4_core: resource tracking for HCA resources used by guests
The resource tracker is used to track usage of HCA resources by the different
guests.
Virtual functions (VFs) are attached to guest operating systems but
resources are allocated from the same pool and are assigned to VFs. It is
essential that hostile/buggy guests not be able to affect the operation of
other VFs, possibly attached to other guest OSs since ConnectX firmware is not
tolerant to misuse of resources.
The resource tracker module associates each resource with a VF and maintains
state information for the allocated object. It also defines allowed state
transitions and enforces them.
Relationships between resources are also referred to. For example, CQs are
pointed to by QPs, so it is forbidden to destroy a CQ if a QP refers to it.
ICM memory is always accessible through the primary function and hence it is
allocated by the owner of the primary function.
When a guest dies, an FLR is generated for all the VFs it owns and all the
resources it used are freed.
The tracked resource types are: QPs, CQs, SRQs, MPTs, MTTs, MACs, RES_EQs,
and XRCDNs.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-13 11:15:24 +07:00
|
|
|
int __mlx4_qp_reserve_range(struct mlx4_dev *dev, int cnt, int align,
|
net/mlx4: Change QP allocation scheme
When using BF (Blue-Flame), the QPN overrides the VLAN, CV, and SV fields
in the WQE. Thus, BF may only be used for QPNs with bits 6,7 unset.
The current Ethernet driver code reserves a Tx QP range with 256b alignment.
This is wrong because if there are more than 64 Tx QPs in use,
QPNs >= base + 65 will have bits 6/7 set.
This problem is not specific for the Ethernet driver, any entity that
tries to reserve more than 64 BF-enabled QPs should fail. Also, using
ranges is not necessary here and is wasteful.
The new mechanism introduced here will support reservation for
"Eth QPs eligible for BF" for all drivers: bare-metal, multi-PF, and VFs
(when hypervisors support WC in VMs). The flow we use is:
1. In mlx4_en, allocate Tx QPs one by one instead of a range allocation,
and request "BF enabled QPs" if BF is supported for the function
2. In the ALLOC_RES FW command, change param1 to:
a. param1[23:0] - number of QPs
b. param1[31-24] - flags controlling QPs reservation
Bit 31 refers to Eth blueflame supported QPs. Those QPs must have
bits 6 and 7 unset in order to be used in Ethernet.
Bits 24-30 of the flags are currently reserved.
When a function tries to allocate a QP, it states the required attributes
for this QP. Those attributes are considered "best-effort". If an attribute,
such as Ethernet BF enabled QP, is a must-have attribute, the function has
to check that attribute is supported before trying to do the allocation.
In a lower layer of the code, mlx4_qp_reserve_range masks out the bits
which are unsupported. If SRIOV is used, the PF validates those attributes
and masks out unsupported attributes as well. In order to notify VFs which
attributes are supported, the VF uses QUERY_FUNC_CAP command. This command's
mailbox is filled by the PF, which notifies which QP allocation attributes
it supports.
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.co.il>
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-12-11 15:57:54 +07:00
|
|
|
int *base, u8 flags)
|
2008-10-11 02:01:37 +07:00
|
|
|
{
|
net/mlx4: Add A0 hybrid steering
A0 hybrid steering is a form of high performance flow steering.
By using this mode, mlx4 cards use a fast limited table based steering,
in order to enable fast steering of unicast packets to a QP.
In order to implement A0 hybrid steering we allocate resources
from different zones:
(1) General range
(2) Special MAC-assigned QPs [RSS, Raw-Ethernet] each has its own region.
When we create a rss QP or a raw ethernet (A0 steerable and BF ready) QP,
we try hard to allocate the QP from range (2). Otherwise, we try hard not
to allocate from this range. However, when the system is pushed to its
limits and one needs every resource, the allocator uses every region it can.
Meaning, when we run out of raw-eth qps, the allocator allocates from the
general range (and the special-A0 area is no longer active). If we run out
of RSS qps, the mechanism tries to allocate from the raw-eth QP zone. If that
is also exhausted, the allocator will allocate from the general range
(and the A0 region is no longer active).
Note that if a raw-eth qp is allocated from the general range, it attempts
to allocate the range such that bits 6 and 7 (blueflame bits) in the
QP number are not set.
When the feature is used in SRIOV, the VF has to notify the PF what
kind of QP attributes it needs. In order to do that, along with the
"Eth QP blueflame" bit, we reserve a new "A0 steerable QP". According
to the combination of these bits, the PF tries to allocate a suitable QP.
In order to maintain backward compatibility (with older PFs), the PF
notifies which QP attributes it supports via QUERY_FUNC_CAP command.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-12-11 15:57:57 +07:00
|
|
|
u32 uid;
|
net/mlx4: Change QP allocation scheme
When using BF (Blue-Flame), the QPN overrides the VLAN, CV, and SV fields
in the WQE. Thus, BF may only be used for QPNs with bits 6,7 unset.
The current Ethernet driver code reserves a Tx QP range with 256b alignment.
This is wrong because if there are more than 64 Tx QPs in use,
QPNs >= base + 65 will have bits 6/7 set.
This problem is not specific for the Ethernet driver, any entity that
tries to reserve more than 64 BF-enabled QPs should fail. Also, using
ranges is not necessary here and is wasteful.
The new mechanism introduced here will support reservation for
"Eth QPs eligible for BF" for all drivers: bare-metal, multi-PF, and VFs
(when hypervisors support WC in VMs). The flow we use is:
1. In mlx4_en, allocate Tx QPs one by one instead of a range allocation,
and request "BF enabled QPs" if BF is supported for the function
2. In the ALLOC_RES FW command, change param1 to:
a. param1[23:0] - number of QPs
b. param1[31-24] - flags controlling QPs reservation
Bit 31 refers to Eth blueflame supported QPs. Those QPs must have
bits 6 and 7 unset in order to be used in Ethernet.
Bits 24-30 of the flags are currently reserved.
When a function tries to allocate a QP, it states the required attributes
for this QP. Those attributes are considered "best-effort". If an attribute,
such as Ethernet BF enabled QP, is a must-have attribute, the function has
to check that attribute is supported before trying to do the allocation.
In a lower layer of the code, mlx4_qp_reserve_range masks out the bits
which are unsupported. If SRIOV is used, the PF validates those attributes
and masks out unsupported attributes as well. In order to notify VFs which
attributes are supported, the VF uses QUERY_FUNC_CAP command. This command's
mailbox is filled by the PF, which notifies which QP allocation attributes
it supports.
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.co.il>
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-12-11 15:57:54 +07:00
|
|
|
int bf_qp = !!(flags & (u8)MLX4_RESERVE_ETH_BF_QP);
|
|
|
|
|
2008-10-11 02:01:37 +07:00
|
|
|
struct mlx4_priv *priv = mlx4_priv(dev);
|
|
|
|
struct mlx4_qp_table *qp_table = &priv->qp_table;
|
|
|
|
|
net/mlx4: Change QP allocation scheme
When using BF (Blue-Flame), the QPN overrides the VLAN, CV, and SV fields
in the WQE. Thus, BF may only be used for QPNs with bits 6,7 unset.
The current Ethernet driver code reserves a Tx QP range with 256b alignment.
This is wrong because if there are more than 64 Tx QPs in use,
QPNs >= base + 65 will have bits 6/7 set.
This problem is not specific for the Ethernet driver, any entity that
tries to reserve more than 64 BF-enabled QPs should fail. Also, using
ranges is not necessary here and is wasteful.
The new mechanism introduced here will support reservation for
"Eth QPs eligible for BF" for all drivers: bare-metal, multi-PF, and VFs
(when hypervisors support WC in VMs). The flow we use is:
1. In mlx4_en, allocate Tx QPs one by one instead of a range allocation,
and request "BF enabled QPs" if BF is supported for the function
2. In the ALLOC_RES FW command, change param1 to:
a. param1[23:0] - number of QPs
b. param1[31-24] - flags controlling QPs reservation
Bit 31 refers to Eth blueflame supported QPs. Those QPs must have
bits 6 and 7 unset in order to be used in Ethernet.
Bits 24-30 of the flags are currently reserved.
When a function tries to allocate a QP, it states the required attributes
for this QP. Those attributes are considered "best-effort". If an attribute,
such as Ethernet BF enabled QP, is a must-have attribute, the function has
to check that attribute is supported before trying to do the allocation.
In a lower layer of the code, mlx4_qp_reserve_range masks out the bits
which are unsupported. If SRIOV is used, the PF validates those attributes
and masks out unsupported attributes as well. In order to notify VFs which
attributes are supported, the VF uses QUERY_FUNC_CAP command. This command's
mailbox is filled by the PF, which notifies which QP allocation attributes
it supports.
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.co.il>
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-12-11 15:57:54 +07:00
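The param1 layout described above (bits [23:0] = QP count, bits [31:24] = reservation flags, bit 31 = Eth blueflame) can be sketched with small standalone helpers. These helper names are illustrative only, not the driver's own API:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helpers (hypothetical names) mirroring the ALLOC_RES
 * param1 layout from the commit message: bits [23:0] carry the QP
 * count, bits [31:24] carry the reservation flags (bit 31 = Eth BF). */
uint32_t pack_alloc_res_param1(uint8_t flags, uint32_t cnt)
{
	return ((uint32_t)flags << 24) | (cnt & 0xffffff);
}

uint32_t param1_qp_count(uint32_t param1)
{
	return param1 & 0xffffff;
}

uint8_t param1_flags(uint32_t param1)
{
	return (uint8_t)(param1 >> 24);
}
```

This is the same packing the driver performs below with `set_param_l(&in_param, (((u32)flags) << 24) | (u32)cnt)`.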
	if (cnt > MLX4_MAX_BF_QP_RANGE && bf_qp)
		return -ENOMEM;

net/mlx4: Add A0 hybrid steering
A0 hybrid steering is a form of high performance flow steering.
By using this mode, mlx4 cards use a fast limited table based steering,
in order to enable fast steering of unicast packets to a QP.
In order to implement A0 hybrid steering we allocate resources
from different zones:
(1) General range
(2) Special MAC-assigned QPs [RSS, Raw-Ethernet] each has its own region.
When we create a rss QP or a raw ethernet (A0 steerable and BF ready) QP,
we try hard to allocate the QP from range (2). Otherwise, we try hard not
to allocate from this range. However, when the system is pushed to its
limits and one needs every resource, the allocator uses every region it can.
Meaning, when we run out of raw-eth qps, the allocator allocates from the
general range (and the special-A0 area is no longer active). If we run out
of RSS qps, the mechanism tries to allocate from the raw-eth QP zone. If that
is also exhausted, the allocator will allocate from the general range
(and the A0 region is no longer active).
Note that if a raw-eth qp is allocated from the general range, it attempts
to allocate the range such that bits 6 and 7 (blueflame bits) in the
QP number are not set.
When the feature is used in SRIOV, the VF has to notify the PF what
kind of QP attributes it needs. In order to do that, along with the
"Eth QP blueflame" bit, we reserve a new "A0 steerable QP". According
to the combination of these bits, the PF tries to allocate a suitable QP.
In order to maintain backward compatibility (with older PFs), the PF
notifies which QP attributes it supports via QUERY_FUNC_CAP command.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-12-11 15:57:57 +07:00
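The blueflame constraint the two messages above keep returning to can be checked in isolation. A minimal sketch (not driver code), using 0xc0 in the role of MLX4_BF_QP_SKIP_MASK: a QPN is BF-eligible only when bits 6 and 7 are clear, so any 256-aligned range contains at most 64 consecutive BF-usable QPNs.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative check: a QPN can use blueflame only when bits 6 and 7
 * are clear (QPN & 0xc0 == 0), hence the 64-QP limit per 256-aligned
 * range noted in the commit messages above. */
int qpn_is_bf_eligible(uint32_t qpn)
{
	return (qpn & 0xc0) == 0;
}
```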
	uid = MLX4_QP_TABLE_ZONE_GENERAL;
	if (flags & (u8)MLX4_RESERVE_A0_QP) {
		if (bf_qp)
			uid = MLX4_QP_TABLE_ZONE_RAW_ETH;
		else
			uid = MLX4_QP_TABLE_ZONE_RSS;
	}

	*base = mlx4_zone_alloc_entries(qp_table->zones, uid, cnt, align,
					bf_qp ? MLX4_BF_QP_SKIP_MASK : 0, NULL);
	if (*base == -1)
		return -ENOMEM;

	return 0;
}

int mlx4_qp_reserve_range(struct mlx4_dev *dev, int cnt, int align,
			  int *base, u8 flags, u8 usage)
{
	u32 in_modifier = RES_QP | (((u32)usage & 3) << 30);
	u64 in_param = 0;
	u64 out_param;
	int err;

	/* Turn off all unsupported QP allocation flags */
	flags &= dev->caps.alloc_res_qp_mask;

	if (mlx4_is_mfunc(dev)) {
		set_param_l(&in_param, (((u32)flags) << 24) | (u32)cnt);
		set_param_h(&in_param, align);
		err = mlx4_cmd_imm(dev, in_param, &out_param,
				   in_modifier, RES_OP_RESERVE,
				   MLX4_CMD_ALLOC_RES,
				   MLX4_CMD_TIME_CLASS_A, MLX4_CMD_WRAPPED);
		if (err)
			return err;

		*base = get_param_l(&out_param);
		return 0;
	}
	return __mlx4_qp_reserve_range(dev, cnt, align, base, flags);
}
EXPORT_SYMBOL_GPL(mlx4_qp_reserve_range);

mlx4_core: resource tracking for HCA resources used by guests
The resource tracker is used to track usage of HCA resources by the different
guests.
Virtual functions (VFs) are attached to guest operating systems but
resources are allocated from the same pool and are assigned to VFs. It is
essential that hostile/buggy guests not be able to affect the operation of
other VFs, possibly attached to other guest OSs since ConnectX firmware is not
tolerant to misuse of resources.
The resource tracker module associates each resource with a VF and maintains
state information for the allocated object. It also defines allowed state
transitions and enforces them.
Relationships between resources are also referred to. For example, CQs are
pointed to by QPs, so it is forbidden to destroy a CQ if a QP refers to it.
ICM memory is always accessible through the primary function and hence it is
allocated by the owner of the primary function.
When a guest dies, an FLR is generated for all the VFs it owns and all the
resources it used are freed.
The tracked resource types are: QPs, CQs, SRQs, MPTs, MTTs, MACs, RES_EQs,
and XRCDNs.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-13 11:15:24 +07:00
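The resource tracker above hinges on "defines allowed state transitions and enforces them". A toy sketch of that idea (not the driver's tracker; the states and transition set here are invented for illustration):

```c
#include <assert.h>

/* Hypothetical resource states and an allowed-transition predicate,
 * sketching the enforcement the resource tracker performs. The real
 * tracker uses its own per-resource-type state machines. */
enum res_state { RES_RESERVED, RES_ALLOCATED, RES_MAPPED, RES_FREED };

int transition_allowed(enum res_state from, enum res_state to)
{
	switch (from) {
	case RES_RESERVED:	return to == RES_ALLOCATED || to == RES_FREED;
	case RES_ALLOCATED:	return to == RES_MAPPED || to == RES_FREED;
	case RES_MAPPED:	return to == RES_FREED;
	default:		return 0;	/* freed resources are terminal */
	}
}
```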
void __mlx4_qp_release_range(struct mlx4_dev *dev, int base_qpn, int cnt)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_qp_table *qp_table = &priv->qp_table;

	if (mlx4_is_qp_reserved(dev, (u32) base_qpn))
		return;
	mlx4_zone_free_entries_unique(qp_table->zones, base_qpn, cnt);
}

void mlx4_qp_release_range(struct mlx4_dev *dev, int base_qpn, int cnt)
{
	u64 in_param = 0;
	int err;

	if (!cnt)
		return;

	if (mlx4_is_mfunc(dev)) {
		set_param_l(&in_param, base_qpn);
		set_param_h(&in_param, cnt);
		err = mlx4_cmd(dev, in_param, RES_QP, RES_OP_RESERVE,
			       MLX4_CMD_FREE_RES,
			       MLX4_CMD_TIME_CLASS_A, MLX4_CMD_WRAPPED);
		if (err) {
			mlx4_warn(dev, "Failed to release qp range base:%d cnt:%d\n",
				  base_qpn, cnt);
		}
	} else
		__mlx4_qp_release_range(dev, base_qpn, cnt);
}
EXPORT_SYMBOL_GPL(mlx4_qp_release_range);

int __mlx4_qp_alloc_icm(struct mlx4_dev *dev, int qpn)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_qp_table *qp_table = &priv->qp_table;
	int err;

	err = mlx4_table_get(dev, &qp_table->qp_table, qpn);
	if (err)
		goto err_out;

	err = mlx4_table_get(dev, &qp_table->auxc_table, qpn);
	if (err)
		goto err_put_qp;

	err = mlx4_table_get(dev, &qp_table->altc_table, qpn);
	if (err)
		goto err_put_auxc;

	err = mlx4_table_get(dev, &qp_table->rdmarc_table, qpn);
	if (err)
		goto err_put_altc;

	err = mlx4_table_get(dev, &qp_table->cmpt_table, qpn);
	if (err)
		goto err_put_rdmarc;

	return 0;

err_put_rdmarc:
	mlx4_table_put(dev, &qp_table->rdmarc_table, qpn);

err_put_altc:
	mlx4_table_put(dev, &qp_table->altc_table, qpn);

err_put_auxc:
	mlx4_table_put(dev, &qp_table->auxc_table, qpn);

err_put_qp:
	mlx4_table_put(dev, &qp_table->qp_table, qpn);

err_out:
	return err;
}

static int mlx4_qp_alloc_icm(struct mlx4_dev *dev, int qpn)
{
	u64 param = 0;

	if (mlx4_is_mfunc(dev)) {
		set_param_l(&param, qpn);
		return mlx4_cmd_imm(dev, param, &param, RES_QP, RES_OP_MAP_ICM,
				    MLX4_CMD_ALLOC_RES, MLX4_CMD_TIME_CLASS_A,
				    MLX4_CMD_WRAPPED);
	}
	return __mlx4_qp_alloc_icm(dev, qpn);
}

void __mlx4_qp_free_icm(struct mlx4_dev *dev, int qpn)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_qp_table *qp_table = &priv->qp_table;

	mlx4_table_put(dev, &qp_table->cmpt_table, qpn);
	mlx4_table_put(dev, &qp_table->rdmarc_table, qpn);
	mlx4_table_put(dev, &qp_table->altc_table, qpn);
	mlx4_table_put(dev, &qp_table->auxc_table, qpn);
	mlx4_table_put(dev, &qp_table->qp_table, qpn);
}

static void mlx4_qp_free_icm(struct mlx4_dev *dev, int qpn)
{
	u64 in_param = 0;

	if (mlx4_is_mfunc(dev)) {
		set_param_l(&in_param, qpn);
		if (mlx4_cmd(dev, in_param, RES_QP, RES_OP_MAP_ICM,
			     MLX4_CMD_FREE_RES, MLX4_CMD_TIME_CLASS_A,
			     MLX4_CMD_WRAPPED))
			mlx4_warn(dev, "Failed to free icm of qp:%d\n", qpn);
	} else
		__mlx4_qp_free_icm(dev, qpn);
}

struct mlx4_qp *mlx4_qp_lookup(struct mlx4_dev *dev, u32 qpn)
{
	struct mlx4_qp_table *qp_table = &mlx4_priv(dev)->qp_table;
	struct mlx4_qp *qp;

	spin_lock(&qp_table->lock);

	qp = __mlx4_qp_lookup(dev, qpn);

	spin_unlock(&qp_table->lock);
	return qp;
}

int mlx4_qp_alloc(struct mlx4_dev *dev, int qpn, struct mlx4_qp *qp)
{
	struct mlx4_priv *priv = mlx4_priv(dev);
	struct mlx4_qp_table *qp_table = &priv->qp_table;
	int err;

	if (!qpn)
		return -EINVAL;

	qp->qpn = qpn;

	err = mlx4_qp_alloc_icm(dev, qpn);
	if (err)
		return err;

	spin_lock_irq(&qp_table->lock);
	err = radix_tree_insert(&dev->qp_table_tree, qp->qpn &
				(dev->caps.num_qps - 1), qp);
	spin_unlock_irq(&qp_table->lock);
	if (err)
		goto err_icm;

	refcount_set(&qp->refcount, 1);
	init_completion(&qp->free);

	return 0;

err_icm:
	mlx4_qp_free_icm(dev, qpn);
	return err;
}

EXPORT_SYMBOL_GPL(mlx4_qp_alloc);

int mlx4_update_qp(struct mlx4_dev *dev, u32 qpn,
		   enum mlx4_update_qp_attr attr,
		   struct mlx4_update_qp_params *params)
{
	struct mlx4_cmd_mailbox *mailbox;
	struct mlx4_update_qp_context *cmd;
	u64 pri_addr_path_mask = 0;
	u64 qp_mask = 0;
	int err = 0;

	if (!attr || (attr & ~MLX4_UPDATE_QP_SUPPORTED_ATTRS))
		return -EINVAL;

	mailbox = mlx4_alloc_cmd_mailbox(dev);
	if (IS_ERR(mailbox))
		return PTR_ERR(mailbox);

	cmd = (struct mlx4_update_qp_context *)mailbox->buf;

	if (attr & MLX4_UPDATE_QP_SMAC) {
		pri_addr_path_mask |= 1ULL << MLX4_UPD_QP_PATH_MASK_MAC_INDEX;
		cmd->qp_context.pri_path.grh_mylmc = params->smac_index;
	}

	if (attr & MLX4_UPDATE_QP_ETH_SRC_CHECK_MC_LB) {
		if (!(dev->caps.flags2
		      & MLX4_DEV_CAP_FLAG2_UPDATE_QP_SRC_CHECK_LB)) {
			mlx4_warn(dev,
				  "Trying to set src check LB, but it isn't supported\n");
			err = -EOPNOTSUPP;
			goto out;
		}
		pri_addr_path_mask |=
			1ULL << MLX4_UPD_QP_PATH_MASK_ETH_SRC_CHECK_MC_LB;
		if (params->flags &
		    MLX4_UPDATE_QP_PARAMS_FLAGS_ETH_CHECK_MC_LB) {
			cmd->qp_context.pri_path.fl |=
				MLX4_FL_ETH_SRC_CHECK_MC_LB;
		}
	}

	if (attr & MLX4_UPDATE_QP_VSD) {
		qp_mask |= 1ULL << MLX4_UPD_QP_MASK_VSD;
		if (params->flags & MLX4_UPDATE_QP_PARAMS_FLAGS_VSD_ENABLE)
			cmd->qp_context.param3 |= cpu_to_be32(MLX4_STRIP_VLAN);
	}

	if (attr & MLX4_UPDATE_QP_RATE_LIMIT) {
		qp_mask |= 1ULL << MLX4_UPD_QP_MASK_RATE_LIMIT;
		cmd->qp_context.rate_limit_params = cpu_to_be16((params->rate_unit << 14) | params->rate_val);
	}

	if (attr & MLX4_UPDATE_QP_QOS_VPORT) {
		if (!(dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_QOS_VPP)) {
			mlx4_warn(dev, "Granular QoS per VF is not enabled\n");
			err = -EOPNOTSUPP;
			goto out;
		}

		qp_mask |= 1ULL << MLX4_UPD_QP_MASK_QOS_VPP;
		cmd->qp_context.qos_vport = params->qos_vport;
	}

	cmd->primary_addr_path_mask = cpu_to_be64(pri_addr_path_mask);
	cmd->qp_mask = cpu_to_be64(qp_mask);

	err = mlx4_cmd(dev, mailbox->dma, qpn & 0xffffff, 0,
		       MLX4_CMD_UPDATE_QP, MLX4_CMD_TIME_CLASS_A,
		       MLX4_CMD_NATIVE);
out:
	mlx4_free_cmd_mailbox(dev, mailbox);
	return err;
}
EXPORT_SYMBOL_GPL(mlx4_update_qp);

void mlx4_qp_remove(struct mlx4_dev *dev, struct mlx4_qp *qp)
{
	struct mlx4_qp_table *qp_table = &mlx4_priv(dev)->qp_table;
	unsigned long flags;

	spin_lock_irqsave(&qp_table->lock, flags);
	radix_tree_delete(&dev->qp_table_tree, qp->qpn & (dev->caps.num_qps - 1));
	spin_unlock_irqrestore(&qp_table->lock, flags);
}
EXPORT_SYMBOL_GPL(mlx4_qp_remove);

void mlx4_qp_free(struct mlx4_dev *dev, struct mlx4_qp *qp)
{
	if (refcount_dec_and_test(&qp->refcount))
		complete(&qp->free);
	wait_for_completion(&qp->free);

	mlx4_qp_free_icm(dev, qp->qpn);
}
EXPORT_SYMBOL_GPL(mlx4_qp_free);

static int mlx4_CONF_SPECIAL_QP(struct mlx4_dev *dev, u32 base_qpn)
{
	return mlx4_cmd(dev, 0, base_qpn, 0, MLX4_CMD_CONF_SPECIAL_QP,
			MLX4_CMD_TIME_CLASS_B, MLX4_CMD_NATIVE);
}

#define MLX4_QP_TABLE_RSS_ETH_PRIORITY 2
|
|
|
|
#define MLX4_QP_TABLE_RAW_ETH_PRIORITY 1
|
|
|
|
#define MLX4_QP_TABLE_RAW_ETH_SIZE 256
|
|
|
|
|
|
|
|
static int mlx4_create_zones(struct mlx4_dev *dev,
			     u32 reserved_bottom_general,
			     u32 reserved_top_general,
			     u32 reserved_bottom_rss,
			     u32 start_offset_rss,
			     u32 max_table_offset)
{
	struct mlx4_qp_table *qp_table = &mlx4_priv(dev)->qp_table;
	struct mlx4_bitmap (*bitmap)[MLX4_QP_TABLE_ZONE_NUM] = NULL;
	int bitmap_initialized = 0;
	u32 last_offset;
	int k;
	int err;

	qp_table->zones = mlx4_zone_allocator_create(MLX4_ZONE_ALLOC_FLAGS_NO_OVERLAP);

	if (NULL == qp_table->zones)
		return -ENOMEM;

	bitmap = kmalloc(sizeof(*bitmap), GFP_KERNEL);

	if (NULL == bitmap) {
		err = -ENOMEM;
		goto free_zone;
	}

	err = mlx4_bitmap_init(*bitmap + MLX4_QP_TABLE_ZONE_GENERAL, dev->caps.num_qps,
			       (1 << 23) - 1, reserved_bottom_general,
			       reserved_top_general);

	if (err)
		goto free_bitmap;

	++bitmap_initialized;

	err = mlx4_zone_add_one(qp_table->zones, *bitmap + MLX4_QP_TABLE_ZONE_GENERAL,
				MLX4_ZONE_FALLBACK_TO_HIGHER_PRIO |
				MLX4_ZONE_USE_RR, 0,
				0, qp_table->zones_uids + MLX4_QP_TABLE_ZONE_GENERAL);

	if (err)
		goto free_bitmap;

	err = mlx4_bitmap_init(*bitmap + MLX4_QP_TABLE_ZONE_RSS,
			       reserved_bottom_rss,
			       reserved_bottom_rss - 1,
			       dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FW],
			       reserved_bottom_rss - start_offset_rss);

	if (err)
		goto free_bitmap;

	++bitmap_initialized;

	err = mlx4_zone_add_one(qp_table->zones, *bitmap + MLX4_QP_TABLE_ZONE_RSS,
				MLX4_ZONE_ALLOW_ALLOC_FROM_LOWER_PRIO |
				MLX4_ZONE_ALLOW_ALLOC_FROM_EQ_PRIO |
				MLX4_ZONE_USE_RR, MLX4_QP_TABLE_RSS_ETH_PRIORITY,
				0, qp_table->zones_uids + MLX4_QP_TABLE_ZONE_RSS);

	if (err)
		goto free_bitmap;

	last_offset = dev->caps.reserved_qps_cnt[MLX4_QP_REGION_FW];
	/*  We have a single zone for the A0 steering QPs area of the FW. This area
	 *  needs to be split into subareas. One set of subareas is for RSS QPs
	 *  (in which qp number bits 6 and/or 7 are set); the other set of subareas
	 *  is for RAW_ETH QPs, which require that both bits 6 and 7 are zero.
	 *  Currently, the values returned by the FW (A0 steering area starting qp number
	 *  and A0 steering area size) are such that there are only two subareas -- one
	 *  for RSS and one for RAW_ETH.
	 */
	for (k = MLX4_QP_TABLE_ZONE_RSS + 1; k < sizeof(*bitmap)/sizeof((*bitmap)[0]);
	     k++) {
		int size;
		u32 offset = start_offset_rss;
		u32 bf_mask;
		u32 requested_size;

		/* Assuming MLX4_BF_QP_SKIP_MASK is consecutive ones, this calculates
		 * a mask of all LSB bits set until (and not including) the first
		 * set bit of MLX4_BF_QP_SKIP_MASK. For example, if MLX4_BF_QP_SKIP_MASK
		 * is 0xc0, bf_mask will be 0x3f.
		 */
		bf_mask = (MLX4_BF_QP_SKIP_MASK & ~(MLX4_BF_QP_SKIP_MASK - 1)) - 1;
		requested_size = min((u32)MLX4_QP_TABLE_RAW_ETH_SIZE, bf_mask + 1);

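		/* Worked example (illustrative values, not read from hardware):
		 * if MLX4_BF_QP_SKIP_MASK == 0xc0 (blueflame bits 6 and 7), then
		 *   MLX4_BF_QP_SKIP_MASK - 1            == 0xbf
		 *   MLX4_BF_QP_SKIP_MASK & ~0xbf        == 0x40 (lowest set bit)
		 *   bf_mask = 0x40 - 1                  == 0x3f
		 *   requested_size = min(256, 0x3f + 1) == 0x40 (64 QPs)
		 */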
		if (((last_offset & MLX4_BF_QP_SKIP_MASK) &&
		     ((int)(max_table_offset - last_offset)) >=
		     roundup_pow_of_two(MLX4_BF_QP_SKIP_MASK)) ||
		    (!(last_offset & MLX4_BF_QP_SKIP_MASK) &&
		     !((last_offset + requested_size - 1) &
		       MLX4_BF_QP_SKIP_MASK)))
			size = requested_size;
		else {
			u32 candidate_offset =
				(last_offset | MLX4_BF_QP_SKIP_MASK | bf_mask) + 1;

			if (last_offset & MLX4_BF_QP_SKIP_MASK)
				last_offset = candidate_offset;

			/* From this point, the BF bits are 0 */

			if (last_offset > max_table_offset) {
				/* need to skip */
				size = -1;
			} else {
				size = min3(max_table_offset - last_offset,
					    bf_mask - (last_offset & bf_mask),
					    requested_size);
				if (size < requested_size) {
					int candidate_size;

					candidate_size = min3(
						max_table_offset - candidate_offset,
						bf_mask - (last_offset & bf_mask),
						requested_size);

					/* We will not take this path if last_offset was
					 * already set above to candidate_offset
					 */
					if (candidate_size > size) {
						last_offset = candidate_offset;
						size = candidate_size;
					}
				}
			}
		}

		if (size > 0) {
			/* mlx4_bitmap_alloc_range will find a contiguous range of "size"
			 * QPs in which both bits 6 and 7 are zero, because we pass it
			 * MLX4_BF_QP_SKIP_MASK.
			 */
			offset = mlx4_bitmap_alloc_range(
					*bitmap + MLX4_QP_TABLE_ZONE_RSS,
					size, 1,
					MLX4_BF_QP_SKIP_MASK);

			if (offset == (u32)-1) {
				err = -ENOMEM;
				break;
			}

			last_offset = offset + size;

			err = mlx4_bitmap_init(*bitmap + k, roundup_pow_of_two(size),
					       roundup_pow_of_two(size) - 1, 0,
					       roundup_pow_of_two(size) - size);
		} else {
			/* Add an empty bitmap, we'll allocate from different zones (since
			 * at least one is reserved)
			 */
			err = mlx4_bitmap_init(*bitmap + k, 1,
					       MLX4_QP_TABLE_RAW_ETH_SIZE - 1, 0,
					       0);
			mlx4_bitmap_alloc_range(*bitmap + k, 1, 1, 0);
		}

		if (err)
			break;

		++bitmap_initialized;

		err = mlx4_zone_add_one(qp_table->zones, *bitmap + k,
					MLX4_ZONE_ALLOW_ALLOC_FROM_LOWER_PRIO |
					MLX4_ZONE_ALLOW_ALLOC_FROM_EQ_PRIO |
					MLX4_ZONE_USE_RR, MLX4_QP_TABLE_RAW_ETH_PRIORITY,
					offset, qp_table->zones_uids + k);

		if (err)
			break;
	}

	if (err)
		goto free_bitmap;

	qp_table->bitmap_gen = *bitmap;

	return err;

free_bitmap:
	for (k = 0; k < bitmap_initialized; k++)
		mlx4_bitmap_cleanup(*bitmap + k);
	kfree(bitmap);
free_zone:
	mlx4_zone_allocator_destroy(qp_table->zones);
	return err;
}

static void mlx4_cleanup_qp_zones(struct mlx4_dev *dev)
{
	struct mlx4_qp_table *qp_table = &mlx4_priv(dev)->qp_table;

	if (qp_table->zones) {
		int i;

		for (i = 0;
		     i < sizeof(qp_table->zones_uids)/sizeof(qp_table->zones_uids[0]);
		     i++) {
			struct mlx4_bitmap *bitmap =
				mlx4_zone_get_bitmap(qp_table->zones,
						     qp_table->zones_uids[i]);

			mlx4_zone_remove_one(qp_table->zones, qp_table->zones_uids[i]);
			if (NULL == bitmap)
				continue;

			mlx4_bitmap_cleanup(bitmap);
		}
		mlx4_zone_allocator_destroy(qp_table->zones);
		kfree(qp_table->bitmap_gen);
		qp_table->bitmap_gen = NULL;
		qp_table->zones = NULL;
	}
}

int mlx4_init_qp_table(struct mlx4_dev *dev)
{
	struct mlx4_qp_table *qp_table = &mlx4_priv(dev)->qp_table;
	int err;
	int reserved_from_top = 0;
	int reserved_from_bot;
	int k;
	int fixed_reserved_from_bot_rv = 0;
	int bottom_reserved_for_rss_bitmap;
	u32 max_table_offset = dev->caps.dmfs_high_rate_qpn_base +
			dev->caps.dmfs_high_rate_qpn_range;

	spin_lock_init(&qp_table->lock);
	INIT_RADIX_TREE(&dev->qp_table_tree, GFP_ATOMIC);
	if (mlx4_is_slave(dev))
		return 0;

	/* We reserve 2 extra QPs per port for the special QPs. The
	 * block of special QPs must be aligned to a multiple of 8, so
	 * round up.
	 *
	 * We also reserve the MSB of the 24-bit QP number to indicate
	 * that a QP is an XRC QP.
	 */
	for (k = 0; k <= MLX4_QP_REGION_BOTTOM; k++)
		fixed_reserved_from_bot_rv += dev->caps.reserved_qps_cnt[k];

	if (fixed_reserved_from_bot_rv < max_table_offset)
		fixed_reserved_from_bot_rv = max_table_offset;

	/* We reserve at least 1 extra for bitmaps that we don't have enough space for */
	bottom_reserved_for_rss_bitmap =
		roundup_pow_of_two(fixed_reserved_from_bot_rv + 1);
	dev->phys_caps.base_sqpn = ALIGN(bottom_reserved_for_rss_bitmap, 8);

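	/* Illustrative arithmetic (hypothetical counts, for clarity only):
	 * if the bottom-region reserved counts sum to 1000 and exceed
	 * max_table_offset, then
	 *   bottom_reserved_for_rss_bitmap = roundup_pow_of_two(1001) = 1024
	 *   base_sqpn = ALIGN(1024, 8) = 1024
	 * i.e. the special-QP block starts at the first 8-aligned QP number
	 * above everything reserved at the bottom of the QP range.
	 */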
	{
		int sort[MLX4_NUM_QP_REGION];
		int i, j;
		int last_base = dev->caps.num_qps;

		for (i = 1; i < MLX4_NUM_QP_REGION; ++i)
			sort[i] = i;

		for (i = MLX4_NUM_QP_REGION; i > MLX4_QP_REGION_BOTTOM; --i) {
			for (j = MLX4_QP_REGION_BOTTOM + 2; j < i; ++j) {
				if (dev->caps.reserved_qps_cnt[sort[j]] >
				    dev->caps.reserved_qps_cnt[sort[j - 1]])
					swap(sort[j], sort[j - 1]);
			}
		}

		for (i = MLX4_QP_REGION_BOTTOM + 1; i < MLX4_NUM_QP_REGION; ++i) {
			last_base -= dev->caps.reserved_qps_cnt[sort[i]];
			dev->caps.reserved_qps_base[sort[i]] = last_base;
			reserved_from_top +=
				dev->caps.reserved_qps_cnt[sort[i]];
		}
	}

	/* Reserve 8 real SQPs in both native and SRIOV modes.
	 * In addition, in SRIOV mode, reserve 8 proxy SQPs per function
	 * (for all PFs and VFs), and 8 corresponding tunnel QPs.
	 * Each proxy SQP works opposite its own tunnel QP.
	 *
	 * The QPs are arranged as follows:
	 * a. 8 real SQPs
	 * b. All the proxy SQPs (8 per function)
	 * c. All the tunnel QPs (8 per function)
	 */
	reserved_from_bot = mlx4_num_reserved_sqps(dev);
	if (reserved_from_bot + reserved_from_top > dev->caps.num_qps) {
		mlx4_err(dev, "Number of reserved QPs is higher than number of QPs\n");
		return -EINVAL;
	}

	err = mlx4_create_zones(dev, reserved_from_bot, reserved_from_bot,
				bottom_reserved_for_rss_bitmap,
				fixed_reserved_from_bot_rv,
				max_table_offset);

	if (err)
		return err;

	if (mlx4_is_mfunc(dev)) {
		/* for PPF use */
		dev->phys_caps.base_proxy_sqpn = dev->phys_caps.base_sqpn + 8;
		dev->phys_caps.base_tunnel_sqpn = dev->phys_caps.base_sqpn + 8 + 8 * MLX4_MFUNC_MAX;

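		/* Resulting layout (derived from the offsets above; S stands for
		 * base_sqpn and is only a placeholder):
		 *   S .. S+7                  : 8 real SQPs
		 *   S+8 ..                    : proxy SQPs, 8 per function
		 *   S+8 + 8*MLX4_MFUNC_MAX .. : tunnel QPs, 8 per function
		 */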
		/* In mfunc, calculate proxy and tunnel qp offsets for the PF here,
		 * since the PF does not call mlx4_slave_caps */
		dev->caps.spec_qps = kcalloc(dev->caps.num_ports,
					     sizeof(*dev->caps.spec_qps),
					     GFP_KERNEL);
		if (!dev->caps.spec_qps) {
			err = -ENOMEM;
			goto err_mem;
		}

		for (k = 0; k < dev->caps.num_ports; k++) {
			dev->caps.spec_qps[k].qp0_proxy = dev->phys_caps.base_proxy_sqpn +
				8 * mlx4_master_func_num(dev) + k;
			dev->caps.spec_qps[k].qp0_tunnel = dev->caps.spec_qps[k].qp0_proxy + 8 * MLX4_MFUNC_MAX;
			dev->caps.spec_qps[k].qp1_proxy = dev->phys_caps.base_proxy_sqpn +
				8 * mlx4_master_func_num(dev) + MLX4_MAX_PORTS + k;
			dev->caps.spec_qps[k].qp1_tunnel = dev->caps.spec_qps[k].qp1_proxy + 8 * MLX4_MFUNC_MAX;
To accomplish this change, several fields were added to the phys_caps
structure for use by the PPF and by non-SR-IOV mode:
base_sqpn -- in non-sriov mode, this was formerly sqp_start.
base_proxy_sqpn -- the first physical proxy qp number -- used by PPF
base_tunnel_sqpn -- the first physical tunnel qp number -- used by PPF.
The current code in the PPF still adheres to the previous layout of
sqps, proxy-sqps and tunnel-sqps. However, the PPF can change this
layout without affecting VF or (paravirtualized) PF code.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-08-03 15:40:57 +07:00
|
|
|
}
|
|
|
|
}

	err = mlx4_CONF_SPECIAL_QP(dev, dev->phys_caps.base_sqpn);
	if (err)
		goto err_mem;

	return err;

err_mem:
	kfree(dev->caps.spec_qps);
	dev->caps.spec_qps = NULL;
	mlx4_cleanup_qp_zones(dev);
	return err;
}

void mlx4_cleanup_qp_table(struct mlx4_dev *dev)
{
	if (mlx4_is_slave(dev))
		return;

	mlx4_CONF_SPECIAL_QP(dev, 0);

	mlx4_cleanup_qp_zones(dev);
}

int mlx4_qp_query(struct mlx4_dev *dev, struct mlx4_qp *qp,
		  struct mlx4_qp_context *context)
{
	struct mlx4_cmd_mailbox *mailbox;
	int err;

	mailbox = mlx4_alloc_cmd_mailbox(dev);
	if (IS_ERR(mailbox))
		return PTR_ERR(mailbox);

	err = mlx4_cmd_box(dev, 0, mailbox->dma, qp->qpn, 0,
			   MLX4_CMD_QUERY_QP, MLX4_CMD_TIME_CLASS_A,
			   MLX4_CMD_WRAPPED);
	if (!err)
		memcpy(context, mailbox->buf + 8, sizeof(*context));

	mlx4_free_cmd_mailbox(dev, mailbox);
	return err;
}
EXPORT_SYMBOL_GPL(mlx4_qp_query);

int mlx4_qp_to_ready(struct mlx4_dev *dev, struct mlx4_mtt *mtt,
		     struct mlx4_qp_context *context,
		     struct mlx4_qp *qp, enum mlx4_qp_state *qp_state)
{
	int err;
	int i;
	enum mlx4_qp_state states[] = {
		MLX4_QP_STATE_RST,
		MLX4_QP_STATE_INIT,
		MLX4_QP_STATE_RTR,
		MLX4_QP_STATE_RTS
	};

	/* Walk the QP through the RST -> INIT -> RTR -> RTS transitions,
	 * patching the target state into bits 31:28 of the context flags
	 * before each modify command.
	 */
	for (i = 0; i < ARRAY_SIZE(states) - 1; i++) {
		context->flags &= cpu_to_be32(~(0xf << 28));
		context->flags |= cpu_to_be32(states[i + 1] << 28);
		if (states[i + 1] != MLX4_QP_STATE_RTR)
			context->params2 &= ~cpu_to_be32(MLX4_QP_BIT_FPP);
		err = mlx4_qp_modify(dev, mtt, states[i], states[i + 1],
				     context, 0, 0, qp);
		if (err) {
			mlx4_err(dev, "Failed to bring QP to state: %d with error: %d\n",
				 states[i + 1], err);
			return err;
		}

		*qp_state = states[i + 1];
	}

	return 0;
}
EXPORT_SYMBOL_GPL(mlx4_qp_to_ready);

u16 mlx4_qp_roce_entropy(struct mlx4_dev *dev, u32 qpn)
{
	struct mlx4_qp_context context;
	struct mlx4_qp qp;
	int err;

	/* Fold the source and destination QP numbers into a 16-bit
	 * entropy value (with the two high bits set), for use as a
	 * RoCE source port.
	 */
	qp.qpn = qpn;
	err = mlx4_qp_query(dev, &qp, &context);
	if (!err) {
		u32 dest_qpn = be32_to_cpu(context.remote_qpn) & 0xffffff;
		u16 folded_dst = folded_qp(dest_qpn);
		u16 folded_src = folded_qp(qpn);

		return (dest_qpn != qpn) ?
			((folded_dst ^ folded_src) | 0xC000) :
			folded_src | 0xC000;
	}

	return 0xdead;
}