/*******************************************************************
 * This file is part of the Emulex Linux Device Driver for         *
 * Fibre Channel Host Bus Adapters.                                *
 * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
 * "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.     *
 * Copyright (C) 2004-2016 Emulex.  All rights reserved.           *
 * EMULEX and SLI are trademarks of Emulex.                        *
 * www.broadcom.com                                                *
 *                                                                 *
 * This program is free software; you can redistribute it and/or   *
 * modify it under the terms of version 2 of the GNU General       *
 * Public License as published by the Free Software Foundation.    *
 * This program is distributed in the hope that it will be useful. *
 * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
 * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
 * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
 * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
 * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
 * more details, a copy of which can be found in the file COPYING  *
 * included with this package.                                     *
 *******************************************************************/

typedef int (*node_filter)(struct lpfc_nodelist *, void *);
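/*
 * A node_filter is handed to the node-list walkers to pick out
 * lpfc_nodelist entries matching caller-supplied criteria; it returns
 * non-zero on a match. A minimal illustrative sketch (the helper name
 * and its use of nlp_DID are assumptions for the example, not
 * declarations from this header):
 *
 *	static int
 *	example_filter_by_did(struct lpfc_nodelist *ndlp, void *param)
 *	{
 *		uint32_t did = *(uint32_t *)param;
 *
 *		return ndlp->nlp_DID == did;
 *	}
 */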

struct fc_rport;
struct fc_frame_header;

/*
 * NVMET ABORT processing (from the commit that reworked this path):
 * the driver originally had this routine stubbed for nvme, the
 * XRI_ABORTED_CQE was not handled, and the FC NVMET transport added
 * a new abort API for the driver.
 *
 * There are three FC NVMET transport API/template routines involved:
 *
 * lpfc_nvmet_xmt_fcp_release
 *   NVMET template callback invoked to release the context associated
 *   with an IO. This routine is ALWAYS called last, even if the IO was
 *   aborted or completed in error.
 *
 * lpfc_nvmet_xmt_fcp_abort
 *   NVMET template callback invoked to abort an exchange that has an
 *   IO in progress.
 *
 * nvmet_fc_rcv_fcp_req
 *   NVME FC transport layer callback invoked when the lpfc driver
 *   receives an ABTS. There are two paths through the driver: either
 *   it has an outstanding exchange/context for the XRI to be aborted,
 *   or it does not. If not, a BA_RJT is issued; otherwise a BA_ACC.
 *
 * NVMET driver abort paths:
 *
 * There are two paths for aborting an IO. In the first, we receive an
 * IO and decide not to process it for lack of resources; an
 * unsolicited ABTS is immediately sent back to the initiator as a
 * response:
 *   lpfc_nvmet_unsol_fcp_buffer
 *     lpfc_nvmet_unsol_issue_abort  (XMIT_SEQUENCE_WQE)
 *
 * In the second, we sent the IO up to the NVMET transport layer to
 * process and, for some reason, the NVME transport layer decided to
 * abort the IO before it completed all its phases. Here the driver
 * either has an outstanding TSEND/TRECEIVE/TRSP WQE for the
 * exchange/context, or it has none:
 *   lpfc_nvmet_xmt_fcp_abort
 *     if (LPFC_NVMET_IO_INP)
 *       lpfc_nvmet_sol_fcp_issue_abort  (ABORT_WQE)
 *         lpfc_nvmet_sol_fcp_abort_cmp
 *     else
 *       lpfc_nvmet_unsol_fcp_issue_abort
 *         lpfc_nvmet_unsol_issue_abort  (XMIT_SEQUENCE_WQE)
 *           lpfc_nvmet_unsol_fcp_abort_cmp
 *
 * Context flags:
 *   LPFC_NVMET_IOP      - this flag signifies an IO is in progress on
 *                         the exchange.
 *   LPFC_NVMET_XBUSY    - the IO completed but the firmware is still
 *                         busy with the corresponding exchange; the
 *                         exchange must not be reused until an
 *                         XRI_ABORTED_CQE is received for it.
 *   LPFC_NVMET_ABORT_OP - an ABORT_WQE was issued on the exchange.
 *   LPFC_NVMET_CTX_RLS  - a context free was requested but is being
 *                         deferred because an XBUSY or ABORT is in
 *                         progress.
 *
 * A ctxlock in the context structure is held whenever these flags are
 * set or read within the context of an IO. LPFC_NVMET_CTX_RLS is only
 * set in the defer_release routine, once the transport has resolved
 * all IO associated with the buffer, and is cleared when the context
 * is associated with a new IO.
 *
 * An exchange can have both the LPFC_NVMET_XBUSY and
 * LPFC_NVMET_ABORT_OP conditions active simultaneously; both must
 * complete before the exchange is freed.
 *
 * When the abort callback (lpfc_nvmet_xmt_fcp_abort) is invoked and
 * there is an outstanding IO, the driver issues an ABORT_WQE, which
 * should produce three completions for the exchange:
 *   1) IO cmpl with the XB bit set
 *   2) Abort WQE cmpl
 *   3) XRI_ABORTED_CQE cmpl
 * In this scenario the NVMET transport IO rsp callback is called after
 * completion #1; no action is taken with respect to the
 * exchange/context after #2; after #3 the exchange context is free for
 * reuse on another IO.
 *
 * If there is no outstanding activity on the exchange, the driver
 * sends an ABTS to the initiator; upon completion of that WQE the
 * exchange/context is freed for reuse on another IO.
 */
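/*
 * Hedged sketch of the deferred-release rule described above
 * (illustrative pseudocode; only the flag names come from the text
 * above, the locals are made up for the example):
 *
 *	spin_lock_irqsave(&ctxp->ctxlock, iflag);
 *	ctxp->flag |= LPFC_NVMET_CTX_RLS;		// free requested
 *	release_now = !(ctxp->flag &
 *			(LPFC_NVMET_XBUSY | LPFC_NVMET_ABORT_OP));
 *	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 *	if (release_now)
 *		// recycle the exchange context for a new IO
 */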

struct lpfc_nvmet_rcv_ctx;
void lpfc_down_link(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_sli_read_link_ste(struct lpfc_hba *);
void lpfc_dump_mem(struct lpfc_hba *, LPFC_MBOXQ_t *, uint16_t, uint16_t);
void lpfc_dump_wakeup_param(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_dump_static_vport(struct lpfc_hba *, LPFC_MBOXQ_t *, uint16_t);
int lpfc_sli4_dump_cfg_rg23(struct lpfc_hba *, struct lpfcMboxq *);
void lpfc_read_nv(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_config_async(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t);

void lpfc_heart_beat(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_read_topology(struct lpfc_hba *, LPFC_MBOXQ_t *, struct lpfc_dmabuf *);
void lpfc_clear_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_issue_clear_la(struct lpfc_hba *, struct lpfc_vport *);
void lpfc_config_link(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_config_msi(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_read_sparam(struct lpfc_hba *, LPFC_MBOXQ_t *, int);
void lpfc_read_config(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_read_lnk_stat(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_reg_rpi(struct lpfc_hba *, uint16_t, uint32_t, uint8_t *,
		 LPFC_MBOXQ_t *, uint16_t);
void lpfc_set_var(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t, uint32_t);
void lpfc_unreg_login(struct lpfc_hba *, uint16_t, uint32_t, LPFC_MBOXQ_t *);
void lpfc_unreg_did(struct lpfc_hba *, uint16_t, uint32_t, LPFC_MBOXQ_t *);
|
2010-10-22 22:06:38 +07:00
|
|
|
void lpfc_sli4_unreg_all_rpis(struct lpfc_vport *);
|
|
|
|
|
2009-05-23 01:51:39 +07:00
|
|
|
void lpfc_reg_vpi(struct lpfc_vport *, LPFC_MBOXQ_t *);
|
2010-01-27 11:08:03 +07:00
|
|
|
void lpfc_register_new_vport(struct lpfc_hba *, struct lpfc_vport *,
|
|
|
|
struct lpfc_nodelist *);
|
2007-06-18 07:56:38 +07:00
|
|
|
void lpfc_unreg_vpi(struct lpfc_hba *, uint16_t, LPFC_MBOXQ_t *);
|
2005-04-18 04:05:31 +07:00
|
|
|
void lpfc_init_link(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t, uint32_t);
|
2009-05-23 01:51:39 +07:00
|
|
|
void lpfc_request_features(struct lpfc_hba *, struct lpfcMboxq *);
|
2010-02-13 02:42:03 +07:00
|
|
|
void lpfc_supported_pages(struct lpfcMboxq *);
|
2011-02-17 00:39:24 +07:00
|
|
|
void lpfc_pc_sli4_params(struct lpfcMboxq *);
|
2010-02-13 02:42:03 +07:00
|
|
|
int lpfc_pc_sli4_params_get(struct lpfc_hba *, LPFC_MBOXQ_t *);
|
2011-05-24 22:44:12 +07:00
|
|
|
int lpfc_sli4_mbox_rsrc_extent(struct lpfc_hba *, struct lpfcMboxq *,
|
|
|
|
uint16_t, uint16_t, bool);
|
2011-02-17 00:39:24 +07:00
|
|
|
int lpfc_get_sli4_parameters(struct lpfc_hba *, LPFC_MBOXQ_t *);
|
2007-08-02 22:09:51 +07:00
|
|
|
struct lpfc_vport *lpfc_find_vport_by_did(struct lpfc_hba *, uint32_t);
|
2009-10-03 02:17:02 +07:00
|
|
|
void lpfc_cleanup_rcv_buffers(struct lpfc_vport *);
|
|
|
|
void lpfc_rcv_seq_check_edtov(struct lpfc_vport *);
|
2008-08-25 08:50:30 +07:00
|
|
|
void lpfc_cleanup_rpis(struct lpfc_vport *, int);
|
2010-01-27 11:08:03 +07:00
|
|
|
void lpfc_cleanup_pending_mbox(struct lpfc_vport *);
|
2005-04-18 04:05:31 +07:00
|
|
|
int lpfc_linkdown(struct lpfc_hba *);
|
2009-05-23 01:51:39 +07:00
|
|
|
void lpfc_linkdown_port(struct lpfc_vport *);
|
2007-10-28 00:37:43 +07:00
|
|
|
void lpfc_port_link_failure(struct lpfc_vport *);
|
2010-11-21 11:11:48 +07:00
|
|
|
void lpfc_mbx_cmpl_read_topology(struct lpfc_hba *, LPFC_MBOXQ_t *);
|
2010-01-27 11:08:03 +07:00
|
|
|
void lpfc_init_vpi_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
|
2010-02-27 02:15:57 +07:00
|
|
|
void lpfc_cancel_all_vport_retry_delay_timer(struct lpfc_hba *);
|
2010-01-27 11:08:03 +07:00
|
|
|
void lpfc_retry_pport_discovery(struct lpfc_hba *);
|
2017-05-16 05:20:45 +07:00
|
|
|
int lpfc_init_iocb_list(struct lpfc_hba *phba, int cnt);
|
|
|
|
void lpfc_free_iocb_list(struct lpfc_hba *phba);
|
2017-05-16 05:20:46 +07:00
|
|
|
int lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq,
|
|
|
|
struct lpfc_queue *drq, int count, int idx);
|
2005-04-18 04:05:31 +07:00
|
|
|
|
2015-12-17 06:11:53 +07:00
|
|
|
void lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_reg_vfi(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_unregister_vfi_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_enqueue_node(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_dequeue_node(struct lpfc_vport *, struct lpfc_nodelist *);
struct lpfc_nodelist *lpfc_enable_node(struct lpfc_vport *,
				       struct lpfc_nodelist *, int);
void lpfc_nlp_set_state(struct lpfc_vport *, struct lpfc_nodelist *, int);
void lpfc_drop_node(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_set_disctmo(struct lpfc_vport *);
int lpfc_can_disctmo(struct lpfc_vport *);
int lpfc_unreg_rpi(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_unreg_all_rpis(struct lpfc_vport *);
void lpfc_unreg_hba_rpis(struct lpfc_hba *);
void lpfc_unreg_default_rpis(struct lpfc_vport *);
void lpfc_issue_reg_vpi(struct lpfc_hba *, struct lpfc_vport *);

int lpfc_check_sli_ndlp(struct lpfc_hba *, struct lpfc_sli_ring *,
			struct lpfc_iocbq *, struct lpfc_nodelist *);
struct lpfc_nodelist *lpfc_nlp_init(struct lpfc_vport *vport, uint32_t did);
struct lpfc_nodelist *lpfc_nlp_get(struct lpfc_nodelist *);
int lpfc_nlp_put(struct lpfc_nodelist *);
int lpfc_nlp_not_used(struct lpfc_nodelist *ndlp);
struct lpfc_nodelist *lpfc_setup_disc_node(struct lpfc_vport *, uint32_t);
void lpfc_disc_list_loopmap(struct lpfc_vport *);
void lpfc_disc_start(struct lpfc_vport *);
void lpfc_cleanup_discovery_resources(struct lpfc_vport *);
void lpfc_cleanup(struct lpfc_vport *);
void lpfc_disc_timeout(struct timer_list *);

int lpfc_unregister_fcf_prep(struct lpfc_hba *);
struct lpfc_nodelist *__lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
struct lpfc_nodelist *lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
void lpfc_worker_wake_up(struct lpfc_hba *);
int lpfc_workq_post_event(struct lpfc_hba *, void *, void *, uint32_t);
int lpfc_do_work(void *);
int lpfc_disc_state_machine(struct lpfc_vport *, struct lpfc_nodelist *, void *,
			    uint32_t);

void lpfc_do_scr_ns_plogi(struct lpfc_hba *, struct lpfc_vport *);
int lpfc_check_sparm(struct lpfc_vport *, struct lpfc_nodelist *,
		     struct serv_parm *, uint32_t, int);
void lpfc_els_abort(struct lpfc_hba *, struct lpfc_nodelist *);
void lpfc_more_plogi(struct lpfc_vport *);
void lpfc_more_adisc(struct lpfc_vport *);
void lpfc_end_rscn(struct lpfc_vport *);
int lpfc_els_chk_latt(struct lpfc_vport *);
int lpfc_els_abort_flogi(struct lpfc_hba *);
int lpfc_initial_flogi(struct lpfc_vport *);
void lpfc_issue_init_vfi(struct lpfc_vport *);
int lpfc_initial_fdisc(struct lpfc_vport *);
int lpfc_issue_els_plogi(struct lpfc_vport *, uint32_t, uint8_t);
int lpfc_issue_els_prli(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_adisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_logo(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_npiv_logo(struct lpfc_vport *, struct lpfc_nodelist *);
int lpfc_issue_els_scr(struct lpfc_vport *vport, uint8_t retry);
int lpfc_issue_els_rscn(struct lpfc_vport *vport, uint8_t retry);
int lpfc_issue_fabric_reglogin(struct lpfc_vport *);
int lpfc_issue_els_rdf(struct lpfc_vport *vport, uint8_t retry);
int lpfc_els_free_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
int lpfc_ct_free_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
int lpfc_els_rsp_acc(struct lpfc_vport *, uint32_t, struct lpfc_iocbq *,
		     struct lpfc_nodelist *, LPFC_MBOXQ_t *);
int lpfc_els_rsp_reject(struct lpfc_vport *, uint32_t, struct lpfc_iocbq *,
			struct lpfc_nodelist *, LPFC_MBOXQ_t *);
int lpfc_els_rsp_adisc_acc(struct lpfc_vport *, struct lpfc_iocbq *,
			   struct lpfc_nodelist *);
int lpfc_els_rsp_prli_acc(struct lpfc_vport *, struct lpfc_iocbq *,
			  struct lpfc_nodelist *);
void lpfc_cancel_retry_delay_tmo(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_els_retry_delay(struct timer_list *);
void lpfc_els_retry_delay_handler(struct lpfc_nodelist *);
void lpfc_els_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
			  struct lpfc_iocbq *);
int lpfc_els_handle_rscn(struct lpfc_vport *);
void lpfc_els_flush_rscn(struct lpfc_vport *);
int lpfc_rscn_payload_check(struct lpfc_vport *, uint32_t);
void lpfc_els_flush_all_cmd(struct lpfc_hba *);
void lpfc_els_flush_cmd(struct lpfc_vport *);
int lpfc_els_disc_adisc(struct lpfc_vport *);
int lpfc_els_disc_plogi(struct lpfc_vport *);
void lpfc_els_timeout(struct timer_list *);
void lpfc_els_timeout_handler(struct lpfc_vport *);
struct lpfc_iocbq *lpfc_prep_els_iocb(struct lpfc_vport *, uint8_t, uint16_t,
				      uint8_t, struct lpfc_nodelist *,
				      uint32_t, uint32_t);
void lpfc_hb_timeout_handler(struct lpfc_hba *);
|
2005-04-18 04:05:31 +07:00
|
|
|
|
|
|
|
void lpfc_ct_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
|
|
|
|
struct lpfc_iocbq *);
|
2013-01-04 03:43:37 +07:00
|
|
|
int lpfc_ct_handle_unsol_abort(struct lpfc_hba *, struct hbq_dmabuf *);
|
2018-10-24 03:41:10 +07:00
|
|
|
int lpfc_issue_gidpt(struct lpfc_vport *vport);
|
2017-02-13 04:52:30 +07:00
|
|
|
int lpfc_issue_gidft(struct lpfc_vport *vport);
|
|
|
|
int lpfc_get_gidft_type(struct lpfc_vport *vport, struct lpfc_iocbq *iocbq);
|
2007-06-18 07:56:38 +07:00
|
|
|
int lpfc_ns_cmd(struct lpfc_vport *, int, uint8_t, uint32_t);
|
2015-12-17 06:11:58 +07:00
|
|
|
int lpfc_fdmi_cmd(struct lpfc_vport *, struct lpfc_nodelist *, int, uint32_t);
|
2019-12-19 06:58:02 +07:00
|
|
|
void lpfc_fdmi_change_check(struct lpfc_vport *vport);
|
2017-09-07 10:24:26 +07:00
|
|
|
void lpfc_delayed_disc_tmo(struct timer_list *);
|
2011-02-17 00:39:44 +07:00
|
|
|
void lpfc_delayed_disc_timeout_handler(struct lpfc_vport *);

int lpfc_config_port_prep(struct lpfc_hba *);
void lpfc_update_vport_wwn(struct lpfc_vport *vport);
int lpfc_config_port_post(struct lpfc_hba *);
int lpfc_hba_down_prep(struct lpfc_hba *);
int lpfc_hba_down_post(struct lpfc_hba *);
void lpfc_hba_init(struct lpfc_hba *, uint32_t *);
int lpfc_post_buffer(struct lpfc_hba *, struct lpfc_sli_ring *, int);
void lpfc_decode_firmware_rev(struct lpfc_hba *, char *, int);
int lpfc_online(struct lpfc_hba *);
void lpfc_unblock_mgmt_io(struct lpfc_hba *);
void lpfc_offline_prep(struct lpfc_hba *, int);
void lpfc_offline(struct lpfc_hba *);
void lpfc_reset_hba(struct lpfc_hba *);
int lpfc_emptyq_wait(struct lpfc_hba *phba, struct list_head *hd,
		     spinlock_t *slock);

int lpfc_sli_setup(struct lpfc_hba *);
int lpfc_sli4_setup(struct lpfc_hba *phba);
void lpfc_sli_queue_init(struct lpfc_hba *phba);
void lpfc_sli4_queue_init(struct lpfc_hba *phba);
struct lpfc_sli_ring *lpfc_sli4_calc_ring(struct lpfc_hba *phba,
					  struct lpfc_iocbq *iocbq);
|
2005-04-18 04:05:31 +07:00
|
|
|
|
|
|
|
void lpfc_handle_eratt(struct lpfc_hba *);
|
|
|
|
void lpfc_handle_latt(struct lpfc_hba *);
|
2009-05-23 01:51:39 +07:00
|
|
|
irqreturn_t lpfc_sli_intr_handler(int, void *);
|
|
|
|
irqreturn_t lpfc_sli_sp_intr_handler(int, void *);
|
|
|
|
irqreturn_t lpfc_sli_fp_intr_handler(int, void *);
|
|
|
|
irqreturn_t lpfc_sli4_intr_handler(int, void *);
|
2012-08-03 23:36:13 +07:00
|
|
|
irqreturn_t lpfc_sli4_hba_intr_handler(int, void *);
|
2005-04-18 04:05:31 +07:00
|
|
|
|
2019-11-12 06:03:58 +07:00
|
|
|
void lpfc_sli4_cleanup_poll_list(struct lpfc_hba *phba);
|
2019-11-05 07:57:05 +07:00
|
|
|
int lpfc_sli4_poll_eq(struct lpfc_queue *q, uint8_t path);
|
|
|
|
void lpfc_sli4_poll_hbtimer(struct timer_list *t);
|
|
|
|
void lpfc_sli4_start_polling(struct lpfc_queue *q);
|
|
|
|
void lpfc_sli4_stop_polling(struct lpfc_queue *q);

void lpfc_read_rev(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_sli4_swap_str(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_config_ring(struct lpfc_hba *, int, LPFC_MBOXQ_t *);
void lpfc_config_port(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_kill_board(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbox_put(struct lpfc_hba *, LPFC_MBOXQ_t *);
LPFC_MBOXQ_t *lpfc_mbox_get(struct lpfc_hba *);
void __lpfc_mbox_cmpl_put(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbox_cmpl_put(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_mbox_cmd_check(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_mbox_dev_check(struct lpfc_hba *);
int lpfc_mbox_tmo_val(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_init_vfi(struct lpfcMboxq *, struct lpfc_vport *);
void lpfc_reg_vfi(struct lpfcMboxq *, struct lpfc_vport *, dma_addr_t);
void lpfc_init_vpi(struct lpfc_hba *, struct lpfcMboxq *, uint16_t);
void lpfc_unreg_vfi(struct lpfcMboxq *, struct lpfc_vport *);
void lpfc_reg_fcfi(struct lpfc_hba *, struct lpfcMboxq *);
void lpfc_reg_fcfi_mrq(struct lpfc_hba *phba, struct lpfcMboxq *mbox, int mode);
void lpfc_unreg_fcfi(struct lpfcMboxq *, uint16_t);
void lpfc_resume_rpi(struct lpfcMboxq *, struct lpfc_nodelist *);
int lpfc_check_pending_fcoe_event(struct lpfc_hba *, uint8_t);
void lpfc_issue_init_vpi(struct lpfc_vport *);

void lpfc_config_hbq(struct lpfc_hba *, uint32_t, struct lpfc_hbq_init *,
		     uint32_t, LPFC_MBOXQ_t *);
struct hbq_dmabuf *lpfc_els_hbq_alloc(struct lpfc_hba *);
void lpfc_els_hbq_free(struct lpfc_hba *, struct hbq_dmabuf *);
struct hbq_dmabuf *lpfc_sli4_rb_alloc(struct lpfc_hba *);
void lpfc_sli4_rb_free(struct lpfc_hba *, struct hbq_dmabuf *);
struct rqb_dmabuf *lpfc_sli4_nvmet_alloc(struct lpfc_hba *phba);
void lpfc_sli4_nvmet_free(struct lpfc_hba *phba, struct rqb_dmabuf *dmab);
void lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba,
			    struct lpfc_nvmet_ctxbuf *ctxp);
int lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
			       struct fc_frame_header *fc_hdr);
void lpfc_nvmet_wqfull_process(struct lpfc_hba *phba, struct lpfc_queue *wq);
void lpfc_sli_flush_nvme_rings(struct lpfc_hba *phba);
void lpfc_nvme_wait_for_io_drain(struct lpfc_hba *phba);
void lpfc_sli4_build_dflt_fcf_record(struct lpfc_hba *, struct fcf_record *,
				     uint16_t);
int lpfc_sli4_rq_put(struct lpfc_queue *hq, struct lpfc_queue *dq,
		     struct lpfc_rqe *hrqe, struct lpfc_rqe *drqe);
int lpfc_free_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hq);
void lpfc_unregister_fcf(struct lpfc_hba *);
void lpfc_unregister_fcf_rescan(struct lpfc_hba *);
void lpfc_unregister_unused_fcf(struct lpfc_hba *);
int lpfc_sli4_redisc_fcf_table(struct lpfc_hba *);
void lpfc_fcf_redisc_wait_start_timer(struct lpfc_hba *);
void lpfc_sli4_fcf_dead_failthrough(struct lpfc_hba *);
uint16_t lpfc_sli4_fcf_rr_next_index_get(struct lpfc_hba *);
void lpfc_sli4_set_fcf_flogi_fail(struct lpfc_hba *, uint16_t);
int lpfc_sli4_fcf_rr_index_set(struct lpfc_hba *, uint16_t);
void lpfc_sli4_fcf_rr_index_clear(struct lpfc_hba *, uint16_t);
int lpfc_sli4_fcf_rr_next_proc(struct lpfc_vport *, uint16_t);
void lpfc_sli4_clear_fcf_rr_bmask(struct lpfc_hba *);

int lpfc_mem_alloc(struct lpfc_hba *, int align);
int lpfc_nvmet_mem_alloc(struct lpfc_hba *phba);
int lpfc_mem_alloc_active_rrq_pool_s4(struct lpfc_hba *);
void lpfc_mem_free(struct lpfc_hba *);
void lpfc_mem_free_all(struct lpfc_hba *);
void lpfc_stop_vport_timers(struct lpfc_vport *);

void lpfc_poll_timeout(struct timer_list *t);
void lpfc_poll_start_timer(struct lpfc_hba *);
void lpfc_poll_eratt(struct timer_list *);
int
lpfc_sli_handle_fast_ring_event(struct lpfc_hba *,
				struct lpfc_sli_ring *, uint32_t);

struct lpfc_iocbq *__lpfc_sli_get_iocbq(struct lpfc_hba *);
struct lpfc_iocbq *lpfc_sli_get_iocbq(struct lpfc_hba *);
void lpfc_sli_release_iocbq(struct lpfc_hba *, struct lpfc_iocbq *);
uint16_t lpfc_sli_next_iotag(struct lpfc_hba *, struct lpfc_iocbq *);
void lpfc_sli_cancel_iocbs(struct lpfc_hba *, struct list_head *, uint32_t,
			   uint32_t);
void lpfc_sli_wake_mbox_wait(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_selective_reset(struct lpfc_hba *);
void lpfc_reset_barrier(struct lpfc_hba *);
int lpfc_sli_brdready(struct lpfc_hba *, uint32_t);
int lpfc_sli_brdkill(struct lpfc_hba *);

/*
 * Note on early SLI-3 reset (from "scsi: lpfc: Fix panic on BFS
 * configuration"): to select the appropriate shost template, the
 * driver issues a mailbox command to retrieve the wwn, and that
 * command precedes the reset of the function. On SLI-4 adapters this
 * is inconsequential, as the mailbox command location is specified by
 * DMA via the BMBX register. On SLI-3 adapters, however, the location
 * of the mailbox command submission area changes: when the function is
 * first powered on or reset, commands are submitted via PCI BAR
 * memory; later the driver changes the function config to use host
 * memory and DMA. The request to start a mailbox command is the same
 * simple doorbell write regardless of submission area. So, if no boot
 * driver has run against the adapter, the mailbox command works, as
 * the defaults are OK. But if the boot driver has configured the card
 * and no platform PCI function/slot reset occurs as the OS starts, the
 * mailbox command will fail: the SLI-3 device uses the stale boot
 * driver DMA location, which can cause PCI EEH errors.
 *
 * The fix is to reset the SLI-3 function before sending the mailbox
 * command, synchronizing the function/driver on the mailbox location.
 * The fix uses routines typically invoked later in the call flow to
 * reset the SLI-3 device; because the normal flow does additional
 * initialization first (namely allocating the pport structure), the
 * routines invoked by an SLI-3 adapter (the "s3" routines) carry
 * pointer checks to tolerate this early call. Nothing changes after
 * the fix: subsequent initialization, and another adapter reset, still
 * occur on both SLI-3 and SLI-4 adapters.
 */
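/*
 * A minimal sketch of that early-reset ordering, assuming the reset
 * helpers declared in this header; the exact call sequence in the
 * probe path may differ:
 *
 *	lpfc_sli_brdrestart(phba);		// sync mbox location first
 *	if (lpfc_sli_chipset_init(phba))	// wait for port readiness
 *		return error;
 *	// only now issue the wwn mailbox command and pick a template
 */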
int lpfc_sli_chipset_init(struct lpfc_hba *phba);
int lpfc_sli_brdreset(struct lpfc_hba *);
int lpfc_sli_brdrestart(struct lpfc_hba *);
int lpfc_sli_hba_setup(struct lpfc_hba *);
int lpfc_sli_config_port(struct lpfc_hba *, int);
int lpfc_sli_host_down(struct lpfc_vport *);
int lpfc_sli_hba_down(struct lpfc_hba *);
int lpfc_sli_issue_mbox(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t);
int lpfc_sli_handle_mb_event(struct lpfc_hba *);
void lpfc_sli_mbox_sys_shutdown(struct lpfc_hba *, int);
int lpfc_sli_check_eratt(struct lpfc_hba *);
void lpfc_sli_handle_slow_ring_event(struct lpfc_hba *,
				     struct lpfc_sli_ring *, uint32_t);
void lpfc_sli4_handle_received_buffer(struct lpfc_hba *, struct hbq_dmabuf *);
void lpfc_sli4_seq_abort_rsp(struct lpfc_vport *vport,
			     struct fc_frame_header *fc_hdr, bool aborted);
void lpfc_sli_def_mbox_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_sli_issue_iocb(struct lpfc_hba *, uint32_t,
			struct lpfc_iocbq *, uint32_t);
int lpfc_sli4_issue_wqe(struct lpfc_hba *phba, struct lpfc_sli4_hdw_queue *qp,
			struct lpfc_iocbq *pwqe);
struct lpfc_sglq *__lpfc_clear_active_sglq(struct lpfc_hba *phba, uint16_t xri);
struct lpfc_sglq *__lpfc_sli_get_nvmet_sglq(struct lpfc_hba *phba,
					    struct lpfc_iocbq *piocbq);
void lpfc_sli_pcimem_bcopy(void *, void *, uint32_t);
void lpfc_sli_bemem_bcopy(void *, void *, uint32_t);
void lpfc_sli_abort_iocb_ring(struct lpfc_hba *, struct lpfc_sli_ring *);
void lpfc_sli_abort_fcp_rings(struct lpfc_hba *phba);
void lpfc_sli_hba_iocb_abort(struct lpfc_hba *);
void lpfc_sli_flush_io_rings(struct lpfc_hba *phba);
int lpfc_sli_ringpostbuf_put(struct lpfc_hba *, struct lpfc_sli_ring *,
			     struct lpfc_dmabuf *);
struct lpfc_dmabuf *lpfc_sli_ringpostbuf_get(struct lpfc_hba *,
					     struct lpfc_sli_ring *,
					     dma_addr_t);

uint32_t lpfc_sli_get_buffer_tag(struct lpfc_hba *);
struct lpfc_dmabuf *lpfc_sli_ring_taggedbuf_get(struct lpfc_hba *,
						struct lpfc_sli_ring *, uint32_t);

int lpfc_sli_hbq_count(void);
int lpfc_sli_hbqbuf_add_hbqs(struct lpfc_hba *, uint32_t);
void lpfc_sli_hbqbuf_free_all(struct lpfc_hba *);
int lpfc_sli_hbq_size(void);
int lpfc_sli_issue_abort_iotag(struct lpfc_hba *, struct lpfc_sli_ring *,
			       struct lpfc_iocbq *);
int lpfc_sli_sum_iocb(struct lpfc_vport *, uint16_t, uint64_t, lpfc_ctx_cmd);
int lpfc_sli_abort_iocb(struct lpfc_vport *, struct lpfc_sli_ring *, uint16_t,
			uint64_t, lpfc_ctx_cmd);
int
lpfc_sli_abort_taskmgmt(struct lpfc_vport *, struct lpfc_sli_ring *,
			uint16_t, uint64_t, lpfc_ctx_cmd);

void lpfc_mbox_timeout(struct timer_list *t);
void lpfc_mbox_timeout_handler(struct lpfc_hba *);

struct lpfc_nodelist *lpfc_findnode_did(struct lpfc_vport *, uint32_t);
struct lpfc_nodelist *lpfc_findnode_wwpn(struct lpfc_vport *,
					 struct lpfc_name *);
struct lpfc_nodelist *lpfc_findnode_mapped(struct lpfc_vport *vport);

int lpfc_sli_issue_mbox_wait(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t);

int lpfc_sli_issue_iocb_wait(struct lpfc_hba *, uint32_t,
			     struct lpfc_iocbq *, struct lpfc_iocbq *,
			     uint32_t);
void lpfc_sli_abort_fcp_cmpl(struct lpfc_hba *, struct lpfc_iocbq *,
			     struct lpfc_iocbq *);

void lpfc_sli_free_hbq(struct lpfc_hba *, struct hbq_dmabuf *);

void *lpfc_mbuf_alloc(struct lpfc_hba *, int, dma_addr_t *);
void __lpfc_mbuf_free(struct lpfc_hba *, void *, dma_addr_t);
void lpfc_mbuf_free(struct lpfc_hba *, void *, dma_addr_t);
void *lpfc_nvmet_buf_alloc(struct lpfc_hba *phba, int flags,
			   dma_addr_t *handle);
void lpfc_nvmet_buf_free(struct lpfc_hba *phba, void *virtp, dma_addr_t dma);

void lpfc_in_buf_free(struct lpfc_hba *, struct lpfc_dmabuf *);
void lpfc_rq_buf_free(struct lpfc_hba *phba, struct lpfc_dmabuf *mp);
int lpfc_link_reset(struct lpfc_vport *vport);
|
2017-02-13 04:52:37 +07:00
|
|
|
|
2005-04-18 04:05:31 +07:00
|
|
|
/* Function prototypes. */
|
2018-12-14 06:17:57 +07:00
|
|
|
int lpfc_check_pci_resettable(const struct lpfc_hba *phba);
|
2005-04-18 04:05:31 +07:00
|
|
|
const char* lpfc_info(struct Scsi_Host *);
|
2007-04-25 20:53:22 +07:00
|
|
|
int lpfc_scan_finished(struct Scsi_Host *, unsigned long);
|
|
|
|
|
2009-05-23 01:51:39 +07:00
|
|
|
int lpfc_init_api_table_setup(struct lpfc_hba *, uint8_t);
|
|
|
|
int lpfc_sli_api_table_setup(struct lpfc_hba *, uint8_t);
|
|
|
|
int lpfc_scsi_api_table_setup(struct lpfc_hba *, uint8_t);
|
|
|
|
int lpfc_mbox_api_table_setup(struct lpfc_hba *, uint8_t);
|
|
|
|
int lpfc_api_table_setup(struct lpfc_hba *, uint8_t);
|
|
|
|
|
2005-04-18 04:05:31 +07:00
|
|
|
void lpfc_get_cfgparam(struct lpfc_hba *);
|
2007-08-02 22:09:59 +07:00
|
|
|
void lpfc_get_vport_cfgparam(struct lpfc_vport *);
|
2007-06-18 07:56:36 +07:00
|
|
|
int lpfc_alloc_sysfs_attr(struct lpfc_vport *);
|
|
|
|
void lpfc_free_sysfs_attr(struct lpfc_vport *);
|
2008-02-22 06:13:36 +07:00
|
|
|
extern struct device_attribute *lpfc_hba_attrs[];
|
|
|
|
extern struct device_attribute *lpfc_vport_attrs[];
|
2005-04-18 04:05:31 +07:00
|
|
|
extern struct scsi_host_template lpfc_template;

/*
 * Note on lpfc_template_no_hr (from "scsi: lpfc: Fix eh_deadline
 * setting for sli3 adapters"): a previous change unilaterally removed
 * the HBA reset entry point from the SLI-3 host template, to keep tape
 * devices being used for backup from being removed. Why? When a
 * non-responding device was on the fabric, the error escalation policy
 * would escalate to the reset handler; the reset handler would reset
 * the adapter, dropping link, thus logging out and terminating all
 * I/Os on any target. A tape device on the same adapter that wasn't in
 * error would have its I/Os killed, effectively destroying the tape
 * device state. With the reset point removed, the adapter reset
 * avoided the fabric logout, allowing the other devices to continue to
 * operate unaffected. A hack, yes. (Hint: we really need a transport
 * I_T nexus reset callback added to the eh process, between the SCSI
 * target reset and HBA reset points, so an FC logout could occur to
 * the one bad target only and stop the error escalation process.)
 *
 * The approach is commonized so it can be used for SLI-3 and SLI-4
 * adapters, but the admin must, via module parameter, specifically
 * identify the adapters on which the resets are to be removed.
 * Additionally, bus_reset, which sends Target Reset TMFs to all
 * targets, is also removed from the template, as it has the same
 * effect as the adapter reset.
 */
extern struct scsi_host_template lpfc_template_no_hr;
extern struct scsi_host_template lpfc_template_nvme;
extern struct scsi_host_template lpfc_vport_template;
extern struct fc_function_template lpfc_transport_functions;
extern struct fc_function_template lpfc_vport_transport_functions;

int lpfc_vport_symbolic_node_name(struct lpfc_vport *, char *, size_t);
int lpfc_vport_symbolic_port_name(struct lpfc_vport *, char *, size_t);
void lpfc_terminate_rport_io(struct fc_rport *);
void lpfc_dev_loss_tmo_callbk(struct fc_rport *rport);

struct lpfc_vport *lpfc_create_port(struct lpfc_hba *, int, struct device *);
int lpfc_vport_disable(struct fc_vport *fc_vport, bool disable);
int lpfc_mbx_unreg_vpi(struct lpfc_vport *);
void destroy_port(struct lpfc_vport *);
int lpfc_get_instance(void);
void lpfc_host_attrib_init(struct Scsi_Host *);

extern void lpfc_debugfs_initialize(struct lpfc_vport *);
extern void lpfc_debugfs_terminate(struct lpfc_vport *);
extern void lpfc_debugfs_disc_trc(struct lpfc_vport *, int, char *, uint32_t,
				  uint32_t, uint32_t);
extern void lpfc_debugfs_slow_ring_trc(struct lpfc_hba *, char *, uint32_t,
				       uint32_t, uint32_t);
extern void lpfc_debugfs_nvme_trc(struct lpfc_hba *phba, char *fmt,
				  uint16_t data1, uint16_t data2, uint32_t data3);
extern struct lpfc_hbq_init *lpfc_hbq_defs[];

/* SLI4 if_type 2 externs. */
int lpfc_sli4_alloc_resource_identifiers(struct lpfc_hba *);
int lpfc_sli4_dealloc_resource_identifiers(struct lpfc_hba *);
int lpfc_sli4_get_allocated_extnts(struct lpfc_hba *, uint16_t,
				   uint16_t *, uint16_t *);
int lpfc_sli4_get_avail_extnt_rsrc(struct lpfc_hba *, uint16_t,
				   uint16_t *, uint16_t *);

/* Interface exported by fabric iocb scheduler */
void lpfc_fabric_abort_nport(struct lpfc_nodelist *);
void lpfc_fabric_abort_hba(struct lpfc_hba *);
void lpfc_fabric_block_timeout(struct timer_list *);
void lpfc_unblock_fabric_iocbs(struct lpfc_hba *);
void lpfc_rampdown_queue_depth(struct lpfc_hba *);
void lpfc_ramp_down_queue_handler(struct lpfc_hba *);
void lpfc_scsi_dev_block(struct lpfc_hba *);

void
lpfc_send_els_failure_event(struct lpfc_hba *, struct lpfc_iocbq *,
			    struct lpfc_iocbq *);
struct lpfc_fast_path_event *lpfc_alloc_fast_evt(struct lpfc_hba *);
void lpfc_free_fast_evt(struct lpfc_hba *, struct lpfc_fast_path_event *);
void lpfc_create_static_vport(struct lpfc_hba *);
void lpfc_stop_hba_timers(struct lpfc_hba *);
void lpfc_stop_port(struct lpfc_hba *);
void __lpfc_sli4_stop_fcf_redisc_wait_timer(struct lpfc_hba *);
void lpfc_sli4_stop_fcf_redisc_wait_timer(struct lpfc_hba *);
void lpfc_parse_fcoe_conf(struct lpfc_hba *, uint8_t *, uint32_t);
int lpfc_parse_vpd(struct lpfc_hba *, uint8_t *, int);
void lpfc_start_fdiscs(struct lpfc_hba *phba);
struct lpfc_vport *lpfc_find_vport_by_vpid(struct lpfc_hba *, uint16_t);
struct lpfc_sglq *__lpfc_get_active_sglq(struct lpfc_hba *, uint16_t);
#define HBA_EVENT_RSCN 5
#define HBA_EVENT_LINK_UP 2
#define HBA_EVENT_LINK_DOWN 3

/* functions to support SGIOv4/bsg interface */
int lpfc_bsg_request(struct bsg_job *);
int lpfc_bsg_timeout(struct bsg_job *);
int lpfc_bsg_ct_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
			    struct lpfc_iocbq *);
int lpfc_bsg_ct_unsol_abort(struct lpfc_hba *, struct hbq_dmabuf *);
void __lpfc_sli_ringtx_put(struct lpfc_hba *, struct lpfc_sli_ring *,
			   struct lpfc_iocbq *);
struct lpfc_iocbq *lpfc_sli_ringtx_get(struct lpfc_hba *,
				       struct lpfc_sli_ring *);
int __lpfc_sli_issue_iocb(struct lpfc_hba *, uint32_t,
			  struct lpfc_iocbq *, uint32_t);
uint32_t lpfc_drain_txq(struct lpfc_hba *);
void lpfc_clr_rrq_active(struct lpfc_hba *, uint16_t, struct lpfc_node_rrq *);
int lpfc_test_rrq_active(struct lpfc_hba *, struct lpfc_nodelist *, uint16_t);
void lpfc_handle_rrq_active(struct lpfc_hba *);
int lpfc_send_rrq(struct lpfc_hba *, struct lpfc_node_rrq *);
int lpfc_set_rrq_active(struct lpfc_hba *, struct lpfc_nodelist *,
			uint16_t, uint16_t, uint16_t);
uint16_t lpfc_sli4_xri_inrange(struct lpfc_hba *, uint16_t);
void lpfc_cleanup_vports_rrqs(struct lpfc_vport *, struct lpfc_nodelist *);
struct lpfc_node_rrq *lpfc_get_active_rrq(struct lpfc_vport *, uint16_t,
					  uint32_t);
void lpfc_idiag_mbxacc_dump_bsg_mbox(struct lpfc_hba *, enum nemb_type,
				     enum mbox_type, enum dma_type, enum sta_type,
				     struct lpfc_dmabuf *, uint32_t);
void lpfc_idiag_mbxacc_dump_issue_mbox(struct lpfc_hba *, MAILBOX_t *);
int lpfc_wr_object(struct lpfc_hba *, struct list_head *, uint32_t, uint32_t *);

/* functions to support SR-IOV */
int lpfc_sli_probe_sriov_nr_virtfn(struct lpfc_hba *, int);
uint16_t lpfc_sli_sriov_nr_virtfn_get(struct lpfc_hba *);
int lpfc_sli4_queue_create(struct lpfc_hba *);
void lpfc_sli4_queue_destroy(struct lpfc_hba *);
void lpfc_sli4_abts_err_handler(struct lpfc_hba *, struct lpfc_nodelist *,
				struct sli4_wcqe_xri_aborted *);
void lpfc_sli_abts_recover_port(struct lpfc_vport *,
				struct lpfc_nodelist *);
int lpfc_hba_init_link_fc_topology(struct lpfc_hba *, uint32_t, uint32_t);
int lpfc_issue_reg_vfi(struct lpfc_vport *);
int lpfc_issue_unreg_vfi(struct lpfc_vport *);
int lpfc_selective_reset(struct lpfc_hba *);
int lpfc_sli4_read_config(struct lpfc_hba *);
void lpfc_sli4_node_prep(struct lpfc_hba *);
int lpfc_sli4_els_sgl_update(struct lpfc_hba *phba);
int lpfc_sli4_nvmet_sgl_update(struct lpfc_hba *phba);
int lpfc_io_buf_flush(struct lpfc_hba *phba, struct list_head *sglist);
int lpfc_io_buf_replenish(struct lpfc_hba *phba, struct list_head *cbuf);
int lpfc_sli4_io_sgl_update(struct lpfc_hba *phba);
int lpfc_sli4_post_io_sgl_list(struct lpfc_hba *phba,
			       struct list_head *blist, int xricnt);
int lpfc_new_io_buf(struct lpfc_hba *phba, int num_to_alloc);

/*
 * XRI pool sharing (from "scsi: lpfc: Adapt partitioned XRI lists to
 * efficient sharing"): the XRI get/put lists were partitioned per
 * hardware queue, but the adapter rarely had sufficient resources to
 * give a large number of resources per queue. It became common for a
 * CPU to encounter a lack of XRI resources and request the upper IO
 * stack to retry after returning a BUSY condition, even though other
 * CPUs were idle and not using their resources.
 *
 * The scheme below moves resources to the CPUs that need them as
 * efficiently as possible. Each CPU maintains a small private pool
 * that it allocates from for IO, with a watermark it attempts to keep
 * in that pool. The private pool, when empty, pulls from the CPU's
 * global pool; when the CPU's global pool is empty, it pulls from
 * other CPUs' global pools. As there are many CPU global pools (one
 * per CPU or hardware queue count), and as each CPU selects the CPU to
 * pull from at different rates and at different times, this creates a
 * randomizing effect that minimizes the number of CPUs that will
 * contend with each other when stealing XRIs from another CPU's global
 * pool.
 *
 * On IO completion, a CPU pushes the XRI back onto its private pool. A
 * watermark level is maintained for the private pool such that, when
 * exceeded, XRIs are moved to the CPU global pool so that other CPUs
 * may allocate them.
 *
 * On NVME, as heartbeat commands are critical to get placed on the
 * wire, a single expedite pool is maintained: a heartbeat allocates
 * its XRI from the expedite pool rather than the normal CPU
 * private/global pools. On any IO completion, if a reduction in the
 * expedite pool is seen, it is replenished before the XRI is placed on
 * the CPU private pool. Statistics are added to aid understanding of
 * the XRI levels on each CPU and their behavior.
 */
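/*
 * Hedged sketch of that allocation order (illustrative pseudocode; the
 * pool names and helpers here are assumptions for the example, not
 * declarations from this header):
 *
 *	buf = pop(this_cpu->private_pool);
 *	if (!buf)
 *		buf = pop(this_cpu->global_pool);
 *	if (!buf)
 *		buf = steal(other_cpu->global_pool);	// victim varies per cpu
 *
 *	// on completion:
 *	push(this_cpu->private_pool, buf);
 *	if (depth(this_cpu->private_pool) > watermark)
 *		move_excess(this_cpu->private_pool, this_cpu->global_pool);
 */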
2019-01-29 02:14:28 +07:00
|
|
|
void lpfc_io_free(struct lpfc_hba *phba);
|
2012-05-10 08:16:12 +07:00
|
|
|
void lpfc_free_sgl_list(struct lpfc_hba *, struct list_head *);
|
2012-09-29 22:32:37 +07:00
|
|
|
uint32_t lpfc_sli_port_speed_get(struct lpfc_hba *);
|
2012-11-01 01:44:33 +07:00
|
|
|
int lpfc_sli4_request_firmware_update(struct lpfc_hba *, uint8_t);
|
2013-04-18 07:18:39 +07:00
|
|
|
void lpfc_sli4_offline_eratt(struct lpfc_hba *);
|
2014-02-20 21:56:45 +07:00
|
|
|
|
|
|
|
struct lpfc_device_data *lpfc_create_device_data(struct lpfc_hba *,
|
|
|
|
struct lpfc_name *,
|
|
|
|
struct lpfc_name *,
|
2016-12-20 06:07:26 +07:00
|
|
|
uint64_t, uint32_t, bool);
|
2014-02-20 21:56:45 +07:00
|
|
|
void lpfc_delete_device_data(struct lpfc_hba *, struct lpfc_device_data*);
|
|
|
|
struct lpfc_device_data *__lpfc_get_device_data(struct lpfc_hba *,
|
|
|
|
struct list_head *list,
|
|
|
|
struct lpfc_name *,
|
|
|
|
struct lpfc_name *, uint64_t);
|
|
|
|
bool lpfc_enable_oas_lun(struct lpfc_hba *, struct lpfc_name *,
|
2016-07-07 02:36:05 +07:00
|
|
|
struct lpfc_name *, uint64_t, uint8_t);
|
2014-02-20 21:56:45 +07:00
|
|
|
bool lpfc_disable_oas_lun(struct lpfc_hba *, struct lpfc_name *,
|
2016-12-20 06:07:26 +07:00
|
|
|
struct lpfc_name *, uint64_t, uint8_t);
|
2014-02-20 21:56:45 +07:00
|
|
|
bool lpfc_find_next_oas_lun(struct lpfc_hba *, struct lpfc_name *,
|
|
|
|
struct lpfc_name *, uint64_t *, struct lpfc_name *,
|
2016-12-20 06:07:26 +07:00
|
|
|
struct lpfc_name *, uint64_t *,
|
|
|
|
uint32_t *, uint32_t *);
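A hedged usage sketch for the OAS (Optimized Access Storage) helpers above,
following only the argument order visible in the prototypes; the wwpn
variables, the priority value of 0, and the iteration pattern are
illustrative assumptions, not driver documentation.

/* Hypothetical OAS usage: enable one lun, then walk the enabled luns.
 * Argument semantics are inferred from the prototypes above only.
 */
static void oas_example(struct lpfc_hba *phba, struct lpfc_name *vpt_wwpn,
			struct lpfc_name *tgt_wwpn)
{
	struct lpfc_name found_vpt, found_tgt;
	uint64_t lun = 0, found_lun;
	uint32_t lun_status, lun_pri;

	/* Mark lun 0 behind this target for optimized access. */
	if (!lpfc_enable_oas_lun(phba, vpt_wwpn, tgt_wwpn, lun, 0))
		return;

	/* Iterate the OAS-enabled luns, resuming after each hit. */
	while (lpfc_find_next_oas_lun(phba, vpt_wwpn, tgt_wwpn, &lun,
				      &found_vpt, &found_tgt, &found_lun,
				      &lun_status, &lun_pri))
		lun = found_lun + 1;	/* continue past the found lun */
}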
int lpfc_sli4_dump_page_a0(struct lpfc_hba *phba, struct lpfcMboxq *mbox);
void lpfc_mbx_cmpl_rdp_page_a0(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb);

/* RAS Interface */
void lpfc_sli4_ras_init(struct lpfc_hba *phba);
void lpfc_sli4_ras_setup(struct lpfc_hba *phba);
int lpfc_sli4_ras_fwlog_init(struct lpfc_hba *phba, uint32_t fwlog_level,
			     uint32_t fwlog_enable);
void lpfc_ras_stop_fwlog(struct lpfc_hba *phba);
int lpfc_check_fwlog_support(struct lpfc_hba *phba);
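A possible call order for the firmware-log interface above, assuming the
usual kernel 0-on-success convention for the int returns; the verbosity
level 4 and enable value 1 are made-up placeholders.

/* Hypothetical call order for the RAS fwlog interface; return-value
 * semantics are assumed (0 on success) and the level/enable values
 * are placeholders, not documented constants.
 */
static void ras_fwlog_example(struct lpfc_hba *phba)
{
	if (lpfc_check_fwlog_support(phba))
		return;		/* adapter does not support fwlog */

	if (lpfc_sli4_ras_fwlog_init(phba, 4, 1))
		return;		/* could not start firmware logging */

	/* ... firmware logging runs; quiesce it before teardown. */
	lpfc_ras_stop_fwlog(phba);
}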

/* NVME interfaces. */
void lpfc_nvme_rescan_port(struct lpfc_vport *vport,
			   struct lpfc_nodelist *ndlp);
void lpfc_nvme_unregister_port(struct lpfc_vport *vport,
			       struct lpfc_nodelist *ndlp);
int lpfc_nvme_register_port(struct lpfc_vport *vport,
			    struct lpfc_nodelist *ndlp);
int lpfc_nvme_create_localport(struct lpfc_vport *vport);
void lpfc_nvme_destroy_localport(struct lpfc_vport *vport);
void lpfc_nvme_update_localport(struct lpfc_vport *vport);
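A hedged lifecycle sketch for the initiator-port helpers above; the
ordering is inferred from the function names and the usual 0-on-success
convention, not from driver documentation.

/* Hypothetical NVME localport/remote-port lifecycle; error handling
 * is minimal and the ordering is an inference from the names above.
 */
static void nvme_port_lifecycle(struct lpfc_vport *vport,
				struct lpfc_nodelist *ndlp)
{
	if (lpfc_nvme_create_localport(vport))
		return;			/* no NVME localport created */

	if (!lpfc_nvme_register_port(vport, ndlp)) {
		/* remote port is live; tear down in reverse order */
		lpfc_nvme_unregister_port(vport, ndlp);
	}
	lpfc_nvme_destroy_localport(vport);
}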
int lpfc_nvmet_create_targetport(struct lpfc_hba *phba);
int lpfc_nvmet_update_targetport(struct lpfc_hba *phba);
void lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba);
void lpfc_nvmet_unsol_ls_event(struct lpfc_hba *phba,
			       struct lpfc_sli_ring *pring,
			       struct lpfc_iocbq *piocb);
scsi: lpfc: Separate CQ processing for nvmet_fc upcalls
Currently the driver is notified of new command frame receipt by CQEs. As
part of the CQE processing, the driver upcalls the nvmet_fc transport to
deliver the command. nvmet_fc, as part of receiving the command, builds out
a context for it, where one of the first steps is to allocate memory for
the io.
When running tests that do large ios (1MB), it was found on some systems
that the total number of outstanding I/Os, at 1MB each, completely consumed
the system's memory. Thus additional ios were getting blocked in the memory
allocator. Given that this blocked the lpfc thread processing CQEs, there
were lots of other commands that were received and then held up, and given
CQEs are serially processed, the aggregate delays for an IO waiting behind
the others became cumulative - enough so that the initiator hit timeouts
for the ios.
The basic fix is to avoid the direct upcall and instead schedule a work
item for each io as it is received. This allows the cq processing to
complete very quickly, and each io can then run or block on its own.
However, this general solution hurts latency when there are few ios. As
such, the fix is implemented so that the driver watches how many CQEs it
has processed sequentially in one run. As long as the count is below a
threshold, the direct nvmet_fc upcall will be made. Only when the count is
exceeded will it revert to work scheduling.
Given that debug of this showed a surprisingly long delay in cq processing,
the io timer stats were updated to better reflect the processing of the
different points.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
void lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba, uint32_t idx,
				struct rqb_dmabuf *nvmebuf, uint64_t isr_ts,
				uint8_t cqflag);
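A minimal sketch of the threshold scheme just described; every name below
(cq_process, CQ_DIRECT_LIMIT, io_ctx, and the two helpers) is illustrative,
not a real driver identifier, and the threshold value is assumed.

/* Hypothetical sketch of the CQE-count threshold described above. */
#include <linux/workqueue.h>

#define CQ_DIRECT_LIMIT	64	/* assumed threshold */

struct io_ctx {
	struct work_struct defer_work;	/* runs the nvmet_fc upcall */
	/* ... command frame, buffers ... */
};

struct cq_hdl {
	struct workqueue_struct *wq;
	/* ... queue ring state ... */
};

struct io_ctx *cq_next_io(struct cq_hdl *cq);	/* pops next CQE's io */
void nvmet_deliver_direct(struct io_ctx *io);	/* direct upcall path */

static void cq_process(struct cq_hdl *cq)
{
	struct io_ctx *io;
	u32 consumed = 0;

	while ((io = cq_next_io(cq)) != NULL) {
		if (++consumed <= CQ_DIRECT_LIMIT)
			/* Short run of CQEs: keep the low-latency path. */
			nvmet_deliver_direct(io);
		else
			/* Long run: defer to a work item so a blocking
			 * memory allocation cannot stall CQE processing.
			 */
			queue_work(cq->wq, &io->defer_work);
	}
}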
void lpfc_nvme_mod_param_dep(struct lpfc_hba *phba);
void lpfc_nvme_abort_fcreq_cmpl(struct lpfc_hba *phba,
				struct lpfc_iocbq *cmdiocb,
				struct lpfc_wcqe_complete *abts_cmpl);
void lpfc_create_multixri_pools(struct lpfc_hba *phba);
void lpfc_create_destroy_pools(struct lpfc_hba *phba);
void lpfc_move_xri_pvt_to_pbl(struct lpfc_hba *phba, u32 hwqid);
void lpfc_move_xri_pbl_to_pvt(struct lpfc_hba *phba, u32 hwqid, u32 cnt);
void lpfc_adjust_high_watermark(struct lpfc_hba *phba, u32 hwqid);
void lpfc_keep_pvt_pool_above_lowwm(struct lpfc_hba *phba, u32 hwqid);
void lpfc_adjust_pvt_pool_count(struct lpfc_hba *phba, u32 hwqid);
#ifdef LPFC_MXP_STAT
void lpfc_snapshot_mxp(struct lpfc_hba *, u32);
#endif
struct lpfc_io_buf *lpfc_get_io_buf(struct lpfc_hba *phba,
				    struct lpfc_nodelist *ndlp, u32 hwqid,
				    int);
void lpfc_release_io_buf(struct lpfc_hba *phba, struct lpfc_io_buf *ncmd,
			 struct lpfc_sli4_hdw_queue *qp);
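A hedged usage sketch of the get/release pair above. submit_wqe() is
illustrative, and the hdwq indexing mirrors the qp argument's type but is
an assumption, not quoted driver code.

/* Hypothetical caller of lpfc_get_io_buf()/lpfc_release_io_buf(). */
int submit_wqe(struct lpfc_hba *phba, struct lpfc_io_buf *buf); /* illustrative */

static int issue_io(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
		    u32 hwqid, bool expedite)
{
	struct lpfc_io_buf *buf;

	/* Final int arg selects the expedite pool for critical cmds. */
	buf = lpfc_get_io_buf(phba, ndlp, hwqid, expedite ? 1 : 0);
	if (!buf)
		return -EBUSY;		/* upper stack retries later */

	if (submit_wqe(phba, buf)) {
		/* Failed to post: put the XRI back on this hwq's pools
		 * (hdwq indexing here is an assumption). */
		lpfc_release_io_buf(phba, buf, &phba->sli4_hba.hdwq[hwqid]);
		return -EIO;
	}
	return 0;
}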
void lpfc_nvme_cmd_template(void);
void lpfc_nvmet_cmd_template(void);
scsi: lpfc: Fix hang when downloading fw on port enabled for nvme
As part of firmware download, the adapter is reset. On the adapter, the
reset causes the function to stop and all outstanding io is terminated
(without responses). The reset path then starts teardown of the adapter,
starting with deregistration of the remote ports with the nvme-fc
transport. The local port is then deregistered and the driver waits for
local port deregistration. This never finishes.
The remote port deregistrations terminated the nvme controllers, causing
them to send aborts for all the outstanding io. The aborts were serviced in
the driver, but stalled due to its state. The nvme layer then waits to
reclaim its outstanding io before continuing. The io must be returned
before the reset on the controller is deemed complete and the controller
delete performed. The remote port deregistration won't complete until all
the controllers are terminated. And the local port deregistration won't
complete until all controllers and remote ports are terminated. Thus things
hang.
The issue is that the reset which stopped the adapter also stopped all the
responses that would drive i/o completions, and the aborts that would
otherwise drive completions were stopped as well. The driver, when
resetting the adapter like this, needs to generate the completions as part
of the adapter reset so that I/Os complete (in error) and aborts are not
queued.
Fix by adding flush routines whenever the adapter port has been reset or
discovered in error. The flush routines will generate the completions for
the scsi and nvme outstanding io. The abort ios, if waiting, will be caught
and flushed as well.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
void lpfc_nvme_cancel_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn);
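The flush approach can be sketched roughly as below; the function name,
the inflight list argument, and the iocbq list linkage are assumptions for
illustration, with lpfc_nvme_cancel_iocb() used per its prototype above.

/* Hypothetical flush sketch: walk outstanding ios and complete each
 * one in error so the transport can make forward progress after an
 * adapter reset.  List and helper names are illustrative only.
 */
static void flush_outstanding_nvme_io(struct lpfc_hba *phba,
				      struct list_head *inflight)
{
	struct lpfc_iocbq *pwqe, *next;

	list_for_each_entry_safe(pwqe, next, inflight, list) {
		list_del_init(&pwqe->list);
		/* Generate the completion the dead adapter never will. */
		lpfc_nvme_cancel_iocb(phba, pwqe);
	}
}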
void lpfc_nvme_prep_abort_wqe(struct lpfc_iocbq *pwqeq, u16 xritag, u8 opt);
extern int lpfc_enable_nvmet_cnt;
extern unsigned long long lpfc_enable_nvmet[];
scsi: lpfc: Fix eh_deadline setting for sli3 adapters.
A previous change unilaterally removed the hba reset entry point
from the sli3 host template. This was done to keep tape devices
being used for backup from being removed. Why was this done?
When there was a non-responding device on the fabric, the error
escalation policy would escalate to the reset handler. When the
reset handler was called, it would reset the adapter, dropping
link, thus logging out and terminating all i/o's - on any target.
If there was a tape device on the same adapter that wasn't in
error, it would kill the tape i/o's, effectively killing the
tape device state. With the reset entry point removed, the adapter
reset and its fabric logout were avoided, allowing the other
devices to continue to operate unaffected. A hack - yes. Hint: we
really need a transport I_T nexus reset callback added to the eh
process (in between the SCSI target reset and hba reset points), so
a fc logout could occur to the one bad target only and stop the
error escalation process.
This patch commonizes the approach so it can be used for sli3 and sli4
adapters, but mandates that the admin, via module parameter, specifically
identify the adapters for which the resets are to be removed. Additionally,
bus_reset, which sends Target Reset TMFs to all targets, is also removed
from the template as it too has the same effect as the adapter reset.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Laurence Oberman <loberman@redhat.com>
Tested-by: Laurence Oberman <loberman@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
extern int lpfc_no_hba_reset_cnt;
extern unsigned long lpfc_no_hba_reset[];
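A sketch of how the module-parameter list above might be consulted; the
helper name and the u64 WWPN representation are illustrative assumptions.

/* Hypothetical check against the admin-supplied WWPN list: if this
 * port's wwpn appears in lpfc_no_hba_reset[], the reset handler is
 * suppressed for it.  Helper name is illustrative only.
 */
static bool hba_reset_disabled(u64 wwpn)
{
	int i;

	for (i = 0; i < lpfc_no_hba_reset_cnt; i++)
		if (lpfc_no_hba_reset[i] == wwpn)
			return true;	/* admin opted this port out */
	return false;
}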