When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
This cleans up a lot of unneeded code and logic around the debugfs wimax
files, making all of this much simpler and easier to understand.
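As a minimal sketch of that pattern (hypothetical "foo" driver names, not the
actual wimax code), the only return value worth keeping is the top-level
dentry, and only so the directory can be removed on teardown:

#include <linux/debugfs.h>

/* Hypothetical driver state; only the top-level dentry is kept so the
 * whole directory can be removed later. */
struct foo_priv {
        struct dentry *debugfs_dir;
        u32 stats;
};

static void foo_debugfs_add(struct foo_priv *priv, struct dentry *parent)
{
        /* No error checking: if debugfs is disabled or creation fails,
         * the returned dentry is passed straight to the next call, which
         * then becomes a no-op. The driver behaves the same either way. */
        priv->debugfs_dir = debugfs_create_dir("foo", parent);
        debugfs_create_u32("stats", 0444, priv->debugfs_dir, &priv->stats);
}

static void foo_debugfs_remove(struct foo_priv *priv)
{
        debugfs_remove_recursive(priv->debugfs_dir);
}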
Cc: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
Cc: linux-wimax@intel.com
Cc: netdev@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'mlx5-updates-2019-08-09' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2019-08-09
This series includes updates to the mlx5 ethernet and core driver:
In the first 11 patches, Vlad submits part 2 of a 3-part series to allow
TC flow handling for concurrent execution.
1) TC flow handling for concurrent execution (part 2)
Vlad says:
==========
Refactor data structures that are shared between flows in tc.
Currently, all cls API hardware offloads driver callbacks require caller
to hold rtnl lock when calling them. Cls API has already been updated to
update software filters in parallel (on classifiers that support
unlocked execution), however hardware offloads code still obtains rtnl
lock before calling driver tc callbacks. This set implements support for
unlocked execution of the tc hairpin, mod_hdr and encap subsystems. The
changes implemented in these subsystems are very similar in general (a
kernel-style C sketch of the shared pattern follows this summary).
The main difference is that hairpin is accessed through mlx5e_tc_table
(legacy mode), mod_hdr is accessed through both mlx5e_tc_table and
mlx5_esw_offload (legacy and switchdev modes) and encap is only accessed
through mlx5_esw_offload (switchdev mode).
1.1) Hairpin handling code and structure mlx5e_hairpin_entry are
refactored in the following way:
- Hairpin structure is extended with an atomic reference counter. This
approach allows looking up a hairpin entry and obtaining a reference to
it under hairpin_tbl_lock protection, and then continuing to use the
entry unlocked (including provisioning to hardware).
- To support unlocked provisioning of a hairpin entry to hardware, the entry
is extended with a 'res_ready' completion and is inserted into hairpin_tbl
before calling the firmware. With this approach, any concurrent users that
attempt to use the same hairpin entry wait for the completion first, which
prevents access to entries that are not fully initialized.
- Hairpin entry is extended with new flows_lock spinlock to protect the
list when multiple concurrent tc instances update flows attached to
the same hairpin entry.
1.2) Modify header handling code and structure mlx5e_mod_hdr_entry
are refactored in the following way:
- Mod_hdr structure is extended with an atomic reference counter. This
approach allows looking up a mod_hdr entry and obtaining a reference to
it under mod_hdr_tbl_lock protection, and then continuing to use the
entry unlocked (including provisioning to hardware).
- To support unlocked provisioning of a mod_hdr entry to hardware, the entry
is extended with a 'res_ready' completion and is inserted into mod_hdr_tbl
before calling the firmware. With this approach, any concurrent users that
attempt to use the same mod_hdr entry wait for the completion first, which
prevents access to entries that are not fully initialized.
- Mod_hdr entry is extended with a new flows_lock spinlock to protect the
list when multiple concurrent tc instances update flows attached to
the same mod_hdr entry.
1.3) Encapsulation handling code and structure mlx5e_encap_entry
are refactored in the following way:
- Encap structure is extended with an atomic reference counter. This
approach allows looking up an encap entry and obtaining a reference to
it under encap_tbl_lock protection, and then continuing to use the
entry unlocked (including provisioning to hardware).
- To support unlocked provisioning of an encap entry to hardware, the entry is
extended with a 'res_ready' completion and is inserted into encap_tbl before
calling the firmware. With this approach, any concurrent users that
attempt to use the same encap entry wait for the completion first, which
prevents access to entries that are not fully initialized.
- Unlike the approach used to refactor hairpin and mod_hdr, the encap
entry is not extended with any per-entry fine-grained lock.
Instead, encap_tbl_lock is used to synchronize all operations on the
encap table and instances of mlx5e_encap_entry. This is necessary
because a single flow can be attached to multiple encap entries
simultaneously. During new flow creation or a neigh update event, all of
the encaps that a flow is attached to must be accessed together in an
atomic manner, which makes a per-entry lock infeasible.
- Encap entry is extended with new flows_lock spinlock to protect the
list when multiple concurrent tc instances update flows attached to
the same encap entry.
==========
3) Parav improves the way port representors report their parent ID and
port index.
4) Use refcount_t for refcount in vxlan data base from Chuhong Yuan
====================
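The hairpin, mod_hdr and encap changes described above share one pattern:
a refcounted entry with a 'res_ready' completion is inserted into its hash
table before the firmware call, the firmware call is made outside the table
lock, and concurrent users wait on the completion and check its result.
The following is a rough, hypothetical sketch of that pattern only; the
names and stub helpers do not match the real mlx5 code:

#include <linux/completion.h>
#include <linux/err.h>
#include <linux/hashtable.h>
#include <linux/mutex.h>
#include <linux/refcount.h>
#include <linux/slab.h>

/* Hypothetical entry mirroring the shape described above. */
struct foo_entry {
        struct hlist_node node;
        u32 key;
        refcount_t refcnt;
        struct completion res_ready;
        int compl_result;       /* set once hardware provisioning finished */
        void *hw_res;           /* hardware resource, valid if compl_result == 0 */
};

static DEFINE_MUTEX(foo_tbl_lock);      /* protects foo_tbl */
static DEFINE_HASHTABLE(foo_tbl, 8);

/* Hypothetical firmware helpers standing in for calls such as
 * mlx5e_hairpin_create() or mlx5_modify_header_alloc(). */
void *foo_create_hw_resource(u32 key);
void foo_destroy_hw_resource(void *res);

static void foo_put(struct foo_entry *e)
{
        /* Last reference: remove from table and release hardware state. */
        if (!refcount_dec_and_mutex_lock(&e->refcnt, &foo_tbl_lock))
                return;
        hash_del(&e->node);
        mutex_unlock(&foo_tbl_lock);
        if (!IS_ERR_OR_NULL(e->hw_res))
                foo_destroy_hw_resource(e->hw_res);
        kfree(e);
}

static struct foo_entry *foo_get(u32 key)
{
        struct foo_entry *e;

        mutex_lock(&foo_tbl_lock);
        hash_for_each_possible(foo_tbl, e, node, key) {
                if (e->key == key && refcount_inc_not_zero(&e->refcnt)) {
                        mutex_unlock(&foo_tbl_lock);
                        /* The creator may still be talking to firmware. */
                        wait_for_completion(&e->res_ready);
                        if (e->compl_result < 0) {
                                int err = e->compl_result;

                                foo_put(e);
                                return ERR_PTR(err);
                        }
                        return e;
                }
        }

        e = kzalloc(sizeof(*e), GFP_KERNEL);
        if (!e) {
                mutex_unlock(&foo_tbl_lock);
                return ERR_PTR(-ENOMEM);
        }
        e->key = key;
        refcount_set(&e->refcnt, 1);
        init_completion(&e->res_ready);
        /* Insert before touching hardware so concurrent callers find the
         * entry and wait on res_ready instead of creating a duplicate. */
        hash_add(foo_tbl, &e->node, key);
        mutex_unlock(&foo_tbl_lock);

        /* Firmware call happens outside the table lock. */
        e->hw_res = foo_create_hw_resource(key);
        e->compl_result = PTR_ERR_OR_ZERO(e->hw_res);
        complete_all(&e->res_ready);
        if (e->compl_result < 0) {
                int err = e->compl_result;

                foo_put(e);
                return ERR_PTR(err);
        }
        return e;
}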
Signed-off-by: David S. Miller <davem@davemloft.net>
Most of the tests run by fcnal-test.sh rely on the nettest command.
Rather than trying to cover each of the individual tests, check for the
binary once at the beginning.
This also removes the need for log_error, which is undefined.
Fixes: 6f9d5cacfe ("selftests: Setup for functional tests for fib and socket lookups")
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
refcount_t is better for reference counters since its
implementation can prevent overflows.
So convert atomic_t ref counters to refcount_t.
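A minimal before/after illustration of such a conversion (hypothetical
structure, not the vxlan database code itself):

#include <linux/refcount.h>

struct foo_obj {
        refcount_t refcnt;      /* was: atomic_t refcnt; */
};

static void foo_init(struct foo_obj *obj)
{
        refcount_set(&obj->refcnt, 1);  /* was: atomic_set(&obj->refcnt, 1) */
}

static void foo_hold(struct foo_obj *obj)
{
        /* Saturates and WARNs instead of wrapping around on overflow. */
        refcount_inc(&obj->refcnt);     /* was: atomic_inc(&obj->refcnt) */
}

static bool foo_put(struct foo_obj *obj)
{
        /* Returns true when the last reference is dropped. */
        return refcount_dec_and_test(&obj->refcnt);
}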
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
It is desired to use unique port indices when multiple pci devices'
devlink instances have the same switch-id.
Make use of vhca-id to generate such unique devlink port indices.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
System image GUID doesn't depend on the eswitch switchdev mode.
Hence, remove the check, which simplifies the code.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Currently, mlx5_eswitch_rep stores the same hw ID for all representors.
However, it is never used from this structure; it is always used from
mlx5_vport.
Hence, remove the unused field.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Encap entry creation is fully synchronized by encap_tbl_lock. In order to
allow concurrent allocation of the hardware resources used to offload
encapsulation, extend mlx5e_encap_entry with a 'res_ready' completion. Move
the call to mlx5e_tc_tun_create_header_ipv{4|6}() out of the encap_tbl_lock
critical section. Modify the code that attaches new flows to an existing
encap to wait for the 'res_ready' completion before using the entry. Insert
the encap entry into the table before provisioning it to hardware, and
modify all users of the encap table to verify that the encap was fully
initialized by checking the completion result for a non-zero value (and to
wait for the 'res_ready' completion, if necessary).
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
To remove dependency on rtnl lock, protect encap hash table from concurrent
modifications with new "encap_tbl_lock" mutex. Use the mutex to protect
internal encap entry state from concurrent modification. This is necessary
because a flow can be attached to multiple encap entries simultaneously,
which significantly complicates using a finer-grained per-entry lock.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
The list of flows attached to an encap entry is used as an implicit
reference counter (the encap entry is deallocated when the list becomes
empty) and as a mechanism to obtain the encap entry that a flow is attached
to (through the list head). This is not safe when concurrent modification
of the list of flows attached to the encap entry is possible. A proper
atomic reference counter is required to support concurrent access.
As a preparation for extending encap with reference counting, extract the
code that looks up and deletes an encap entry into standalone put/get
helpers. In order to remove the dependency on external locking, extend the
encap entry with a reference counter to manage its lifetime and extend the
flow structure with a direct pointer to the encap entry that the flow is
attached to.
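Sketch of the resulting shape, with hypothetical names standing in for the
real mlx5 structures:

#include <linux/list.h>
#include <linux/refcount.h>
#include <linux/slab.h>

/* Instead of deriving the encap entry from a flow's position in the
 * entry's flow list and treating "list became empty" as the last-user
 * signal, the flow keeps a direct back-pointer and the entry keeps a
 * real reference count. */
struct foo_encap_entry {
        struct list_head flows;         /* flows attached to this encap */
        refcount_t refcnt;              /* replaces "list became empty" */
};

struct foo_flow {
        struct list_head encap_node;    /* linked into encap->flows */
        struct foo_encap_entry *encap;  /* new: direct pointer to the entry */
};

static void foo_encap_put(struct foo_encap_entry *e)
{
        if (refcount_dec_and_test(&e->refcnt))
                kfree(e);       /* the real code also releases hw resources */
}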
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Mod_hdr entry creation is fully synchronized by mod_hdr_tbl->lock. In
order to allow concurrent allocation of the hardware resources used to
offload header rewrite, extend mlx5e_mod_hdr_entry with a 'res_ready'
completion. Move the call to mlx5_modify_header_alloc() out of the
mod_hdr_tbl->lock critical section. Modify the code that attaches new flows
to an existing mh to wait for the 'res_ready' completion before using the
entry. Insert the mh into the mod_hdr table before provisioning it to
hardware, and modify all users of the mod_hdr table to verify that the mh
was fully initialized by checking the completion result for a negative
value (and to wait for the 'res_ready' completion, if necessary).
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
To remove dependency on rtnl lock, protect mod_hdr hash table from
concurrent modifications with new mutex.
Implement helper function to get flow namespace to prevent code
duplication.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
To remove dependency on rtnl lock, extend mod header entry with spinlock
and use it to protect list of flows attached to mod header entry from
concurrent modifications.
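A minimal sketch of this locking scheme with hypothetical names (the real
structures live in the mlx5 TC code):

#include <linux/list.h>
#include <linux/spinlock.h>

struct foo_mod_hdr_entry {
        struct list_head flows;         /* flows attached to this entry */
        spinlock_t flows_lock;          /* protects the flows list */
};

struct foo_flow {
        struct list_head mod_hdr;       /* node in entry->flows */
};

static void foo_mod_hdr_init(struct foo_mod_hdr_entry *mh)
{
        INIT_LIST_HEAD(&mh->flows);
        spin_lock_init(&mh->flows_lock);
}

static void foo_attach_flow(struct foo_mod_hdr_entry *mh,
                            struct foo_flow *flow)
{
        /* rtnl no longer serializes these list updates; the spinlock does. */
        spin_lock(&mh->flows_lock);
        list_add(&flow->mod_hdr, &mh->flows);
        spin_unlock(&mh->flows_lock);
}

static void foo_detach_flow(struct foo_mod_hdr_entry *mh,
                            struct foo_flow *flow)
{
        spin_lock(&mh->flows_lock);
        list_del(&flow->mod_hdr);
        spin_unlock(&mh->flows_lock);
}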
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
The list of flows attached to a mod header entry is used as an implicit
reference counter (the mod header entry is deallocated when the list
becomes empty) and as a mechanism to obtain the mod header entry that a
flow is attached to (through the list head). This is not safe when
concurrent modification of the list of flows attached to the mod header
entry is possible. A proper atomic reference counter is required to support
concurrent access.
As a preparation for extending mod header with reference counting, extract
the code that looks up and deletes a mod header entry into standalone
put/get helpers. In order to remove the dependency on external locking,
extend the mod header entry with a reference counter to manage its lifetime
and extend the flow structure with a direct pointer to the mod header entry
that the flow is attached to.
To remove code duplication between the legacy and switchdev mode
implementations that both support mod_hdr functionality, store the mod_hdr
table in a dedicated structure used by both the fdb and kernel namespaces.
The new table structure is extended with a table lock by one of the
following patches in this series. Implement a helper function to get the
correct mod_hdr table depending on the flow namespace.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Hairpin entry creation is fully synchronized by hairpin_tbl_lock. In
order to allow concurrent initialization of mlx5e_hairpin structure
instances and provisioning of hairpin entries to hardware, extend
mlx5e_hairpin_entry with a 'res_ready' completion. Move the call to
mlx5e_hairpin_create() out of the hairpin_tbl_lock critical section. Modify
the code that attaches new flows to an existing hpe to wait for the
'res_ready' completion before using the hpe. Insert the hpe into the
hairpin table before provisioning it to hardware, and modify all users of
the hairpin table to verify that the hpe was fully initialized by checking
the hpe->hp pointer (and to wait for the 'res_ready' completion, if
necessary).
Modify the dead peer update event handling function to save hpes to a
temporary list with their reference counters incremented. Wait for the
completion of the hpes in the temporary list and update their 'peer_gone'
flag outside of the hairpin_tbl_lock critical section.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
To remove dependency on rtnl lock, protect hairpin hash table from
concurrent modifications with new "hairpin_tbl_lock" mutex.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
To remove dependency on rtnl lock, extend hairpin entry with spinlock and
use it to protect list of flows attached to hairpin entry from concurrent
modifications.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
The list of flows attached to a hairpin entry is used as an implicit
reference counter (the hairpin entry is deallocated when the list becomes
empty) and as a mechanism to obtain the hairpin entry that a flow is
attached to (through the list head). This is not safe when concurrent
modification of the list of flows attached to the hairpin entry is
possible. A proper atomic reference counter is required to support
concurrent access.
As a preparation for extending hairpin with reference counting, extract the
code that deletes a hairpin entry into a standalone function. In order to
remove the dependency on external locking, extend the hairpin entry with a
reference counter to manage its lifetime and extend the flow structure with
a direct pointer to the hairpin entry that the flow is attached to.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Huazhong Tan says:
====================
net: hns3: add some bugfixes & optimizations & cleanups for HNS3 driver
This patch-set includes code optimizations, bugfixes and cleanups for
the HNS3 ethernet controller driver.
[patch 01/12] fixes a GFP flag error.
[patch 02/12] fixes a VF interrupt error.
[patch 03/12] adds a cleanup for VLAN handling.
[patch 04/12] fixes a bug in debugfs.
[patch 05/12] modifies pause displaying format.
[patch 06/12] adds more DFX information for ethtool -d.
[patch 07/12] adds more TX statistics information.
[patch 08/12] adds a check for TX BD number.
[patch 09/12] adds a cleanup for dumping NCL_CONFIG.
[patch 10/12] refines function for querying MAC pause statistics.
[patch 11/12] adds a handshake with VF when doing PF reset.
[patch 12/12] refines some macro definitions.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Macro arguments should be enclosed in parentheses in case an expression
is passed as an argument, but parentheses around a pure number in a macro
definition should be removed for simplicity.
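For example (hypothetical macros, not taken from the driver):

/* Expression arguments need parentheses so operator precedence cannot
 * change the meaning at the expansion site... */
#define FOO_ROUND_UP(x, n)      (((x) + (n) - 1) / (n) * (n))

/* ...but a plain numeric constant gains nothing from them. */
#define FOO_MAX_QUEUES          1024    /* not: (1024) */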
Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Before the PF asserts a function reset, it should make sure
that all its VFs are ready; otherwise, it will cause
some hardware errors.
So this patch adds the function hclge_func_reset_sync_vf() to
synchronize the VFs before asserting the PF function reset. For new
firmware which supports the command HCLGE_OPC_QUERY_VF_RST_RDY,
we try to query the VFs' ready status for up to 30 seconds,
and keep the old implementation for compatibility with firmware
which does not support this command.
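A rough sketch of such a synchronization loop, with a hypothetical helper
standing in for the real HCLGE_OPC_QUERY_VF_RST_RDY query:

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

#define FOO_VF_RST_RDY_TIMEOUT_MS       30000   /* "up to 30 seconds" */
#define FOO_VF_RST_RDY_POLL_MS          500

struct foo_dev;

/* Hypothetical wrapper around the firmware query; returns true once
 * every VF reports ready. */
bool foo_all_vfs_ready(struct foo_dev *hdev);

static int foo_func_reset_sync_vf(struct foo_dev *hdev)
{
        unsigned long end = jiffies +
                            msecs_to_jiffies(FOO_VF_RST_RDY_TIMEOUT_MS);

        do {
                if (foo_all_vfs_ready(hdev))
                        return 0;
                msleep(FOO_VF_RST_RDY_POLL_MS);
        } while (time_before(jiffies, end));

        /* Timed out; the caller decides whether to proceed anyway. */
        return -ETIMEDOUT;
}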
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch refines the interface for querying MAC pause
statistics, and adds structure hns3_mac_stats to keep the
count of TX & RX.
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Reviewed-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This adds a new function hclge_ncl_config_data_print()
to print the data of NCL_CONFIG, to make the code more
readable. Also, use macros to replace some magic numbers.
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Reviewed-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hardware supports up to 8 TX BDs for a non-TSO skb and 63 TX
BDs for a TSO skb. Currently the hns3 driver does not check the max
BD num required by an skb before filling the desc, which may
cause the hardware to issue a RAS error through PCIe AER.
This patch adds the max BD num check before filling the desc;
if the BD num is not within the hardware limit, it records the
error in the ring->stats.sw_err_cnt counter and frees the skb.
This patch also cleans up the hns3_nic_bd_num function by
changing the return type and removing an unnecessary check.
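A simplified sketch of this kind of check (the limits are taken from the
description above; the helper and structures are hypothetical):

#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define FOO_MAX_NON_TSO_BD_NUM  8
#define FOO_MAX_TSO_BD_NUM      63

struct foo_ring {
        struct {
                u64 sw_err_cnt;
        } stats;
};

/* Hypothetical helper returning how many descriptors the skb needs. */
unsigned int foo_skb_bd_num(struct sk_buff *skb);

static int foo_check_bd_num(struct foo_ring *ring, struct sk_buff *skb)
{
        unsigned int max = skb_is_gso(skb) ? FOO_MAX_TSO_BD_NUM :
                                             FOO_MAX_NON_TSO_BD_NUM;

        if (foo_skb_bd_num(skb) <= max)
                return 0;

        /* Dropping in software avoids handing the hardware a frame it
         * would flag as a RAS error through PCIe AER. The real counter
         * update is wrapped in u64_stats_update_begin()/_end(). */
        ring->stats.sw_err_cnt++;
        dev_kfree_skb_any(skb);
        return -EINVAL;
}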
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds tx_vlan_err, tx_l4_proto_err, tx_l2l3l4_err
and tx_tso_err counters to the TX process, in order to better
debug desc filling errors.
This patch also adds a missing u64_stats_update_* around
ring->stats.sw_err_cnt and adds hns3_rl_err to limit
error printing in the IO path.
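A sketch of the two pieces, with hypothetical names (the real hns3_rl_err
and ring statistics differ in detail):

#include <linux/netdevice.h>
#include <linux/u64_stats_sync.h>

struct foo_ring_stats {
        struct u64_stats_sync syncp;
        u64 tx_vlan_err;
};

/* Hypothetical rate-limited error print so a busy IO path cannot flood
 * the kernel log. */
#define foo_rl_err(ndev, fmt, ...)                              \
        do {                                                    \
                if (net_ratelimit())                            \
                        netdev_err(ndev, fmt, ##__VA_ARGS__);   \
        } while (0)

static void foo_count_tx_vlan_err(struct net_device *ndev,
                                  struct foo_ring_stats *stats)
{
        /* Writers bump the counter inside the u64_stats section so
         * 32-bit readers see a consistent 64-bit value. */
        u64_stats_update_begin(&stats->syncp);
        stats->tx_vlan_err++;
        u64_stats_update_end(&stats->syncp);

        foo_rl_err(ndev, "failed to fill VLAN info for TX descriptor\n");
}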
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now we can use the ethtool -d command to dump some registers. However,
this register information is not enough to find out where a problem is.
This patch adds DFX register information after the original registers
when the ethtool -d command is used to dump registers. Also, use macros
to replace some related magic numbers.
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Reviewed-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, in the pause options shown by HNS3, "RX/TX" is always the
same as "RX negotiated/TX negotiated", because the driver overwrites
the value of "RX/TX" with the value of "RX negotiated/TX negotiated"
after adjusting the link.
This patch records the pause configuration set by the user, and never
overwrites it when adjusting the link.
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If the input length reaches the maximum value of size_t, an overflow
(wrap-around) is triggered when 1 is added to it. In addition, there is no
need to accept such a large length. Therefore, check the input length and
require it to be less than or equal to 1024.
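A minimal sketch of such a bound check in a debugfs write handler
(hypothetical handler, not the hns3 code):

#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

#define FOO_MAX_CMD_LEN 1024

static ssize_t foo_cmd_write(struct file *filp, const char __user *buf,
                             size_t count, loff_t *ppos)
{
        char *cmd;

        /* Reject absurd lengths up front; "count + 1" below could
         * otherwise wrap around when count == SIZE_MAX. */
        if (count > FOO_MAX_CMD_LEN)
                return -EINVAL;

        cmd = kzalloc(count + 1, GFP_KERNEL);
        if (!cmd)
                return -ENOMEM;

        if (copy_from_user(cmd, buf, count)) {
                kfree(cmd);
                return -EFAULT;
        }

        /* ...parse and handle the NUL-terminated command string... */

        kfree(cmd);
        return count;
}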
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Reviewed-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch refactors the hns3_fill_desc_vtags function
by avoiding passing too many parameters, reducing the indent
level, and doing some other cleanup.
This patch also adds the hns3_fill_skb_desc function to
fill the first desc of a skb.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, the VF driver has two kinds of interrupts: reset & CMDQ RX.
For revision 0x21, according to the UM, each interrupt should be
cleared by writing 0 to the corresponding bit, but the implementation
in fact writes 0 to the whole register; this clears the other
interrupt at the same time, so the VF loses that interrupt.
For revision 0x20, this interrupt clear register is a read &
write register, so for compatibility we just keep the old implementation
for 0x20.
This patch fixes it, and also adds a new register for reading the
interrupt status according to the hardware user manual.
Fixes: e2cb1dec97 ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support")
Fixes: b90fcc5bd9 ("net: hns3: add reset handling for VF when doing Core/Global/IMP reset")
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/sched/sch_taprio.c:680:32: warning:
entry_list_policy defined but not used [-Wunused-const-variable=]
One of the points of commit a3d43c0d56 ("taprio: Add support adding
an admin schedule") is that it removes support (it now returns "not
supported") for schedules using the TCA_TAPRIO_ATTR_SCHED_SINGLE_ENTRY
attribute (which were never used). The parsing of those types of schedules
was the only user of this policy, so removing this policy should be fine.
Reported-by: Hulk Robot <hulkci@huawei.com>
Suggested-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Disabling TSO but leaving SG active results in a significant
performance drop. Therefore also disable SG on RTL8168evl.
This restores the original performance.
Fixes: 93681cd7d9 ("r8169: enable HW csum and TSO")
Signed-off-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
TCP_BASE_MSS is used as the default initial MSS value when MTU probing is
enabled. Update the comment to reflect this.
Suggested-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Josh Hunt <johunt@akamai.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The current implementation of TCP MTU probing can considerably
underestimate the MTU on lossy connections allowing the MSS to get down to
48. We have found that in almost all of these cases on our networks these
paths can handle much larger MTUs meaning the connections are being
artificially limited. Even though TCP MTU probing can raise the MSS back up,
we have seen this not to be the case, causing connections to be "stuck" with
an MSS of 48 when heavy loss is present.
Prior to pushing out this change we could not keep TCP MTU probing enabled
because of the above reasons. Now, with a reasonable floor set, we've had it
enabled for the past 6 months.
The new sysctl will still default to TCP_MIN_SND_MSS (48), but gives
administrators the ability to control the floor of MSS probing.
Signed-off-by: Josh Hunt <johunt@akamai.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The size of the snapshot has to be the same as the size of the region,
therefore no need to pass it again during snapshot creation. Remove the
arg and use region->size instead.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Starting from commit d41a69f1d3 ("tcp: make tcp_sendmsg() aware of socket backlog")
loopback flows got hurt, because for each skb sent, the socket receives an
immediate ACK and sk_flush_backlog() causes extra work.
The intent was to not let the backlog grow too much, but we went a bit too far.
We can check the backlog every 16 skbs (about 1MB chunks)
to increase TCP-over-loopback performance by about 15%.
Note that the call to sk_flush_backlog() handles a single ACK,
thanks to the coalescing done on the backlog, but cleans the 16 skbs
found in the rtx rb-tree.
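A minimal illustration of the idea (not the actual tcp_sendmsg() hunk):

#include <net/sock.h>

/* Inside the send loop, flush the socket backlog only once per 16 skbs
 * (roughly 1MB worth of data) instead of after every queued skb, so
 * loopback ACK processing is amortized. */
static void foo_maybe_flush_backlog(struct sock *sk, int queued_skbs)
{
        if ((queued_skbs & 15) == 0)
                sk_flush_backlog(sk);
}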
Reported-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesper Dangaard Brouer says:
====================
V3: Hopefully fixed all issues pointed out by Yonghong Song
V2: Addressed issues pointed out by Yonghong Song
- Please ACK patch 2/3 again
- Added ACKs and reviewed-by to other patches
This patchset is focused on improvements for the XDP forwarding sample
named xdp_fwd, which leverages the existing FIB routing tables as
described in the LPC2018[1] talk by David Ahern.
The primary motivation is to illustrate how Toke's recent work
improves usability of XDP_REDIRECT via lookups in devmap. The other
patches are to help users understand the sample.
I have more improvements to xdp_fwd, but those might require changes
to libbpf. Thus, sending these patches first as they are isolated.
[1] http://vger.kernel.org/lpc-networking2018.html#session-1
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Make it clear that this XDP program depends on the network
stack to do the ARP resolution. This is connected with the
BPF_FIB_LKUP_RET_NO_NEIGH return code from bpf_fib_lookup().
Another common mistake (seen via the XDP-tutorial) is that users
don't realize that the sysctl net.ipv{4,6}.conf.all.forwarding
setting is honored by bpf_fib_lookup.
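A trimmed-down sketch of that lookup path in an XDP program (assumed
includes and section names; not the full xdp_fwd_kern.c):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_fwd_sketch(struct xdp_md *ctx)
{
        struct bpf_fib_lookup fib_params = {};
        int rc;

        /* fib_params would be filled from the parsed IP header here;
         * ifindex must be the ingress interface for the lookup. */
        fib_params.ifindex = ctx->ingress_ifindex;

        rc = bpf_fib_lookup(ctx, &fib_params, sizeof(fib_params), 0);
        if (rc == BPF_FIB_LKUP_RET_NO_NEIGH) {
                /* No neighbour entry yet: pass to the stack so it does
                 * ARP/ND resolution; later packets can be redirected. */
                return XDP_PASS;
        }
        if (rc != BPF_FIB_LKUP_RET_SUCCESS)
                return XDP_PASS;

        /* Forwarding also requires net.ipv{4,6}.conf.all.forwarding=1,
         * which bpf_fib_lookup() honours. */
        return bpf_redirect(fib_params.ifindex, 0);
}

char _license[] SEC("license") = "GPL";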
Reported-by: Anton Protopopov <a.s.protopopov@gmail.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
This addresses the TODO in samples/bpf/xdp_fwd_kern.c, which points out
that the chosen egress index should be checked for existence in the
devmap. This can now be done by taking advantage of Toke's work in
commit 0cdbb4b09a ("devmap: Allow map lookups from eBPF").
This change makes xdp_fwd more practically usable, as it allows for
a mixed environment, where IP forwarding falls back to the network stack
if the egress device isn't configured to use XDP.
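A hedged sketch of the check, reusing the xdp_tx_ports devmap name from the
following rename patch (map definition style and sizes are assumptions):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_DEVMAP);
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(int));
        __uint(max_entries, 64);
} xdp_tx_ports SEC(".maps");

/* Only redirect when the chosen egress ifindex has actually been
 * inserted into the devmap by the userspace loader; otherwise let the
 * kernel network stack forward the packet. */
static __always_inline int maybe_redirect(int egress_ifindex)
{
        if (!bpf_map_lookup_elem(&xdp_tx_ports, &egress_ifindex))
                return XDP_PASS;

        return bpf_redirect_map(&xdp_tx_ports, egress_ifindex, 0);
}

char _license[] SEC("license") = "GPL";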
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The devmap name 'tx_port' came from a copy-paste from xdp_redirect_map,
which only has a single TX port. Change the name to xdp_tx_ports
to make it more descriptive.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Use kmap instead of page_address, as the page is not always in low memory.
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
octeon_mbox_process_cmd() directly writes the PCI_EXP_DEVCTL_BCR_FLR
bit, which bypasses timing requirements imposed by the PCIe spec.
This patch fixes the function to use the pcie_flr() interface instead.
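A minimal before/after sketch (the surrounding function is hypothetical):

#include <linux/pci.h>

static void foo_reset_vf(struct pci_dev *vfdev)
{
        /* Before: setting PCI_EXP_DEVCTL_BCR_FLR by hand skips the
         * pre-FLR quiescing and the mandatory wait after the reset:
         *
         *   pcie_capability_set_word(vfdev, PCI_EXP_DEVCTL,
         *                            PCI_EXP_DEVCTL_BCR_FLR);
         */

        /* After: pcie_flr() checks readiness, issues the FLR and waits
         * the time required by the PCIe spec. */
        pcie_flr(vfdev);
}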
Signed-off-by: Denis Efremov <efremov@linux.com>
Reviewed-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We allocate 16kb per rx buffer, so we can avoid some overhead by using
alloc_pages_node directly instead of bothering kmalloc_node. Due to
this change buffers are page-aligned now, therefore the alignment check
can be removed.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Acked-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes gcc '-Wunused-but-set-variable' warning:
net/sched/sch_fq_codel.c: In function fq_codel_dequeue:
net/sched/sch_fq_codel.c:288:23: warning: variable prev_ecn_mark set but not used [-Wunused-but-set-variable]
net/sched/sch_fq_codel.c:288:6: warning: variable prev_drop_count set but not used [-Wunused-but-set-variable]
They have been unused since commit 77ddaff218 ("fq_codel: Kill
useless per-flow dropped statistic").
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Extend existing driver for Spectrum and Spectrum-2 ASICs
to support Spectrum-3 ASIC as well.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu says:
====================
net: stmmac: Improvements for -next
[ This is just a rebase of v2 into latest -next in order to avoid a merge
conflict ]
Couple of improvements for -next tree. More info in commit logs.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a selftest for the Flexible RX Parser feature.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
XGMAC cores also support the Flexible RX Parser feature. Add support
for it in the XGMAC core.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>