commit 533d1daea8
Several subsystems depend on INFINIBAND_ADDR_TRANS, which in turn depends
on INFINIBAND. However, with CONFIG_INFINIBAND=m, this leads to a
link error when another driver using it is built-in. The
INFINIBAND_ADDR_TRANS dependency is insufficient here, as it is
a 'bool' symbol that does not force anything to be a module in turn.
fs/cifs/smbdirect.o: In function `smbd_disconnect_rdma_work':
smbdirect.c:(.text+0x1e4): undefined reference to `rdma_disconnect'
net/9p/trans_rdma.o: In function `rdma_request':
trans_rdma.c:(.text+0x7bc): undefined reference to `rdma_disconnect'
net/9p/trans_rdma.o: In function `rdma_destroy_trans':
trans_rdma.c:(.text+0x830): undefined reference to `ib_destroy_qp'
trans_rdma.c:(.text+0x858): undefined reference to `ib_dealloc_pd'
Fixes: 9533b292a7 ("IB: remove redundant INFINIBAND kconfig dependencies")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Greg Thelen <gthelen@google.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
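
To illustrate the Kconfig pattern this change restores, here is a minimal
sketch; the EXAMPLE_ULP symbol is hypothetical and only stands in for a
consumer such as CIFS_SMB_DIRECT or NET_9P_RDMA:

# Insufficient: INFINIBAND_ADDR_TRANS is a bool, so EXAMPLE_ULP can be
# built-in (=y) while the RDMA core (CONFIG_INFINIBAND=m) is a module,
# leaving rdma_disconnect() and friends unresolved at vmlinux link time.
config EXAMPLE_ULP
        tristate "Example RDMA consumer"
        depends on INFINIBAND_ADDR_TRANS

# Restored pattern: the extra dependency on the tristate INFINIBAND
# symbol caps EXAMPLE_ULP at =m whenever the RDMA core is =m.
config EXAMPLE_ULP
        tristate "Example RDMA consumer"
        depends on INFINIBAND && INFINIBAND_ADDR_TRANS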
60 lines · 1.7 KiB · Plaintext
config NVME_CORE
        tristate

config BLK_DEV_NVME
        tristate "NVM Express block device"
        depends on PCI && BLOCK
        select NVME_CORE
        ---help---
          The NVM Express driver is for solid state drives directly
          connected to the PCI or PCI Express bus. If you know you
          don't have one of these, it is safe to answer N.

          To compile this driver as a module, choose M here: the
          module will be called nvme.

config NVME_MULTIPATH
        bool "NVMe multipath support"
        depends on NVME_CORE
        ---help---
          This option enables support for multipath access to NVMe
          subsystems. If this option is enabled only a single
          /dev/nvmeXnY device will show up for each NVMe namespaces,
          even if it is accessible through multiple controllers.

config NVME_FABRICS
        tristate

config NVME_RDMA
        tristate "NVM Express over Fabrics RDMA host driver"
        depends on INFINIBAND && INFINIBAND_ADDR_TRANS && BLOCK
        select NVME_CORE
        select NVME_FABRICS
        select SG_POOL
        help
          This provides support for the NVMe over Fabrics protocol using
          the RDMA (Infiniband, RoCE, iWarp) transport. This allows you
          to use remote block devices exported using the NVMe protocol set.

          To configure a NVMe over Fabrics controller use the nvme-cli tool
          from https://github.com/linux-nvme/nvme-cli.

          If unsure, say N.

config NVME_FC
        tristate "NVM Express over Fabrics FC host driver"
        depends on BLOCK
        depends on HAS_DMA
        select NVME_CORE
        select NVME_FABRICS
        select SG_POOL
        help
          This provides support for the NVMe over Fabrics protocol using
          the FC transport. This allows you to use remote block devices
          exported using the NVMe protocol set.

          To configure a NVMe over Fabrics controller use the nvme-cli tool
          from https://github.com/linux-nvme/nvme-cli.

          If unsure, say N.
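
For reference, a sketch of how the restored dependency plays out in a
configuration (a hypothetical .config excerpt, not taken from any real
build): once CONFIG_INFINIBAND is a module, NVME_RDMA can only be offered
as a module as well, which is what prevents the built-in link errors
quoted in the commit message above.

# Hypothetical .config fragment: INFINIBAND=m caps NVME_RDMA at =m,
# and the selected NVME_CORE/NVME_FABRICS follow as modules here.
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_ADDR_TRANS=y
CONFIG_NVME_CORE=m
CONFIG_NVME_FABRICS=m
CONFIG_NVME_RDMA=m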