commit 3316f06311
The current Receive path uses an array of pages which are allocated
and DMA mapped when each Receive WR is posted, and then handed off
to the upper layer in rqstp::rq_arg. The page flip releases unused
pages in the rq_pages pagelist. This mechanism introduces a
significant amount of overhead.

So instead, kmalloc the Receive buffer, and leave it DMA-mapped
while the transport remains connected. This confers a number of
benefits:

* Each Receive WR requires only one receive SGE, no matter how large
  the inline threshold is. This helps the server-side NFS/RDMA
  transport operate on less capable RDMA devices.

* The Receive buffer is left allocated and mapped all the time. This
  relieves svc_rdma_post_recv from the overhead of allocating and
  DMA-mapping a fresh buffer.

* svc_rdma_wc_receive no longer has to DMA unmap the Receive buffer.
  It has to DMA sync only the number of bytes that were received.

* svc_rdma_build_arg_xdr no longer has to free a page in rq_pages
  for each page in the Receive buffer, making it a constant-time
  function.

* The Receive buffer is now plugged directly into the rq_arg's
  head[0].iov_base, and can be larger than a page without spilling
  over into rq_arg's page list. This enables simplification of the
  RDMA Read path in subsequent patches.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
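
Below is a minimal sketch of the technique the patch describes, not the
actual svcrdma implementation: a Receive buffer is kmalloc'd and
DMA-mapped once at connect time, posted with a single SGE, and on
completion only the bytes the device actually wrote are DMA-synced. The
my_recv_ctxt structure, my_* function names, and RECV_BUFLEN constant
are hypothetical; the ib_dma_* helpers and ib_post_recv are the
standard RDMA core verbs. On disconnect, the buffer would be unmapped
with ib_dma_unmap_single and freed.

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <rdma/ib_verbs.h>

/* Hypothetical names for illustration only. */
#define RECV_BUFLEN 4096

struct my_recv_ctxt {
	struct ib_recv_wr	rc_recv_wr;
	struct ib_sge		rc_recv_sge;
	struct ib_cqe		rc_cqe;
	void			*rc_recv_buf;
};

/* Completion handler: sync only the bytes the device actually wrote. */
static void my_recv_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct my_recv_ctxt *ctxt =
		container_of(wc->wr_cqe, struct my_recv_ctxt, rc_cqe);

	if (wc->status != IB_WC_SUCCESS)
		return;
	ib_dma_sync_single_for_cpu(cq->device, ctxt->rc_recv_sge.addr,
				   wc->byte_len, DMA_FROM_DEVICE);
	/* ctxt->rc_recv_buf can now back rq_arg.head[0] directly. */
}

/* Allocate and DMA-map the Receive buffer once, at connect time. */
static struct my_recv_ctxt *my_alloc_recv_buffer(struct ib_device *device,
						 struct ib_pd *pd)
{
	struct my_recv_ctxt *ctxt;
	u64 addr;

	ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
	if (!ctxt)
		return NULL;
	ctxt->rc_recv_buf = kmalloc(RECV_BUFLEN, GFP_KERNEL);
	if (!ctxt->rc_recv_buf)
		goto fail;

	addr = ib_dma_map_single(device, ctxt->rc_recv_buf,
				 RECV_BUFLEN, DMA_FROM_DEVICE);
	if (ib_dma_mapping_error(device, addr))
		goto fail;

	/* One SGE covers the whole buffer, whatever the inline threshold. */
	ctxt->rc_recv_sge.addr = addr;
	ctxt->rc_recv_sge.length = RECV_BUFLEN;
	ctxt->rc_recv_sge.lkey = pd->local_dma_lkey;
	ctxt->rc_cqe.done = my_recv_done;
	ctxt->rc_recv_wr.wr_cqe = &ctxt->rc_cqe;
	ctxt->rc_recv_wr.sg_list = &ctxt->rc_recv_sge;
	ctxt->rc_recv_wr.num_sge = 1;
	return ctxt;

fail:
	kfree(ctxt->rc_recv_buf);
	kfree(ctxt);
	return NULL;
}

/* Re-posting the same WR is cheap: no allocation or mapping per post. */
static int my_post_recv(struct ib_qp *qp, struct my_recv_ctxt *ctxt)
{
	return ib_post_recv(qp, &ctxt->rc_recv_wr, NULL);
}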