Commit Graph

16985 Commits

Author SHA1 Message Date
Florian Fainelli
1105a2d3b3 net: mv643xx_eth: Do not clobber PHY link outside of state machine
Calling phy_read_status() means that we may call into
genphy_read_status() which in turn will use genphy_update_link() which
can make changes to phydev->link outside of the state machine's state
transitions. This is an invalid behavior that is now caught as of
811a919135 ("phy state machine: failsafe leave invalid RUNNING state")

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:48:21 -05:00
David S. Miller
501ec18757 mlx5-updates-2017-01-31

Merge tag 'mlx5-updates-2017-01-31' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2017-01-31

This series includes some updates to mlx5 core and ethernet driver.

We got one patch from Or to fix some static checker warnings.

The second patch, from Dan, adds support for 128B cache lines in the HCA:
it configures the hardware to use 128B alignment only on systems with
128B cache lines; otherwise the current default of 64B is kept.

From me, three patches to support no inline copy on TX on ConnectX-5 and
later HCAs, starting with two small infrastructure and refactoring
patches, followed by two patches that add the actual support to both the
xmit ndo and the XDP xmit routines.
The last patch is a simple fix to restore a pointer that was mistakenly
removed from the SQ structure in the previous mlx5 4K UAR submission.

Saeed.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:44:08 -05:00
Ivan Khoronzhuk
219189e764 net: ethernet: ti: cpsw: remove netif_trans_update
No need to update jiffies in txq->trans_start twice; it's supposed to be
done in netdev_start_xmit() and is overwritten anyway. Also, there is no
reason to update the trans time in case of an error.
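
For reference, a minimal sketch of why the driver-side call is redundant,
assuming the core helper behaves as in mainline of this era (simplified,
not the cpsw code itself):

    /* The core refreshes trans_start right after a successful
     * ->ndo_start_xmit(), so a netif_trans_update() inside the
     * driver's xmit routine only duplicates this work. */
    rc = __netdev_start_xmit(ops, skb, dev, more);
    if (rc == NETDEV_TX_OK)
            txq_trans_update(txq);  /* refreshes txq->trans_start */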

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:36:53 -05:00
Michael Chan
3841340627 bnxt_en: Add support for XDP_TX action.
Add a dedicated transmit function and transmit completion handler for
XDP.  The XDP transmit logic and completion logic are different from
those of a regular TX ring.  The TX buffer is recycled back to the RX
ring when
it completes.

v3: Improved the buffer recycling scheme for XDP_TX.

v2: Add trace_xdp_exception().
    Add dma_sync.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Tested-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:31:00 -05:00
Michael Chan
c6d30e8391 bnxt_en: Add basic XDP support.
Add basic ndo_xdp support to set up and query the program, configure the
NIC to run in rx page mode, and support only the XDP_PASS, XDP_DROP, and
XDP_ABORTED actions.
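
A rough sketch of the rx-path dispatch this implies, with illustrative
names (not the actual bnxt code):

    /* Illustrative only: run the attached program on the received page
     * and act on the limited set of verdicts supported here. */
    static bool rx_xdp_sketch(struct net_device *dev,
                              struct bpf_prog *prog, struct xdp_buff *xdp)
    {
            u32 act = bpf_prog_run_xdp(prog, xdp);

            switch (act) {
            case XDP_PASS:
                    /* build an skb from the possibly modified
                     * offset/length and hand it to the stack */
                    return false;
            default:
                    bpf_warn_invalid_xdp_action(act);
                    /* fall through */
            case XDP_ABORTED:
                    trace_xdp_exception(dev, prog, act);
                    /* fall through */
            case XDP_DROP:
                    /* recycle the page back to the rx ring */
                    return true;
            }
    }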

v3: Pass modified offset and length to stack for XDP_PASS.
    Remove Kconfig option.

v2: Added trace_xdp_exception()
    Added dma_syncs.
    Added XDP headroom support.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Tested-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:31:00 -05:00
Michael Chan
fa3e93e86c bnxt_en: Refactor tx completion path.
XDP_TX requires a different function to handle completion.  Add a
function pointer to handle tx completion logic.  Regular TX rings
will be assigned the current bnxt_tx_int() for the ->tx_int()
function pointer.
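
Sketched roughly (where exactly the pointer lives is a detail of the
patch; this only shows the shape of the indirection):

    /* Illustrative only: tx completion goes through a function pointer
     * so that XDP rings can later install a different handler. */
    void (*tx_int)(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts);

    bp->tx_int = bnxt_tx_int;       /* default for regular TX rings */

    /* in the NAPI poll routine, when tx completions are seen */
    bp->tx_int(bp, bnapi, tx_pkts);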

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:59 -05:00
Michael Chan
5f4492493e bnxt_en: Add a set of TX rings to support XDP.
Add logic for an extra set of TX rings for XDP.  If enabled, this
set of TX rings equals the number of RX rings and shares the same
IRQ as the RX ring set.  A new field bp->tx_nr_rings_xdp is added
to keep track of these TX XDP rings.  Adjust all other relevant functions
to handle bp->tx_nr_rings_xdp.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:59 -05:00
Michael Chan
a960dec988 bnxt_en: Add tx ring mapping logic.
To support XDP_TX, we need to add a set of dedicated TX rings, each
associated with the NAPI of an RX ring.  To assign XDP rings and regular
rings in a flexible way, we add a bp->tx_ring_map[] array to do the
remapping.  The netdev txq index is stored in the new field txq_index
so that we can retrieve the netdev txq when handling TX completions.
In this patch, before we introduce XDP_TX, the mapping is 1:1.
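
Sketched (illustrative; the real patch has more bookkeeping):

    /* Illustrative only: without XDP the map is the identity ... */
    for (i = 0; i < bp->tx_nr_rings; i++)
            bp->tx_ring_map[i] = i;

    /* ... and on completion the netdev txq is recovered from the index
     * stored in the ring rather than from the ring's position. */
    struct netdev_queue *txq;

    txq = netdev_get_tx_queue(bp->dev, txr->txq_index);
    netdev_tx_completed_queue(txq, nr_pkts, tx_bytes);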

v2: Fixed a bug in bnxt_tx_int().

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:59 -05:00
Michael Chan
d1e7925e6d bnxt_en: Centralize logic to reserve rings.
Currently, bnxt_setup_tc() and bnxt_set_channels() have similar and
duplicated code to check and reserve rx and tx rings.  Add a new
function bnxt_reserve_rings() to centralize the logic.  This will
make it easier to add XDP_TX support which requires allocating a
new set of TX rings.

Also, the tx ring checking logic in bnxt_setup_msix() can be removed;
the rings have already been reserved beforehand.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:59 -05:00
Michael Chan
4e5dbbda4c bnxt_en: Use event bit map in RX path.
In the current code, we have separate rx_event and agg_event parameters
to keep track of rx and aggregation events.  Combine these events into
a u8 event mask with different bits defined for different events.  This
way, it is easier to expand the logic to include XDP tx events.
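
The pattern being adopted, sketched with made-up flag names and values:

    /* Illustrative only: one bit per event instead of separate bools. */
    #define RX_EVENT        0x01
    #define AGG_EVENT       0x02
    #define TX_EVENT        0x04    /* room for XDP tx events later */

    u8 event = 0;

    event |= RX_EVENT;              /* was: rx_event = true */

    if (event & (RX_EVENT | AGG_EVENT)) {
            /* ring the rx/aggregation doorbells as needed */
    }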

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:59 -05:00
Michael Chan
c61fb99cae bnxt_en: Add RX page mode support.
This mode is to support XDP.  In this mode, each rx ring is configured
with page sized buffers for linear placement of each packet.  MTU will be
restricted to what the page sized buffers can support.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:58 -05:00
Michael Chan
b3dba77cf0 bnxt_en: Parameterize RX buffer offsets.
Convert the global constants BNXT_RX_OFFSET and BNXT_RX_DMA_OFFSET to
device parameters.  This will make it easier to support XDP with
headroom support which requires different RX buffer offsets.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:58 -05:00
Michael Chan
745fc05c9d bnxt_en: Add bp->rx_dir field for rx buffer DMA direction.
When the driver is running in XDP mode, rx buffers are DMA mapped as
DMA_BIDIRECTIONAL.  Add a field so the code will map/unmap rx buffers
according to this field.
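
Sketch of the idea (bp->rx_dir as named above; the surrounding variable
names are illustrative):

    /* Illustrative only: pick the direction once, use it everywhere. */
    bp->rx_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;

    mapping = dma_map_single(&pdev->dev, data_ptr, rx_buf_size,
                             bp->rx_dir);

    /* later, when the same buffer is unmapped */
    dma_unmap_single(&pdev->dev, mapping, rx_buf_size, bp->rx_dir);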

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:58 -05:00
Michael Chan
11cd119d31 bnxt_en: Don't use DEFINE_DMA_UNMAP_ADDR to store DMA address in RX path.
To support XDP_TX, we need the RX buffer's DMA address to transmit the
packet.  Convert the DMA address field to a permanent field in
bnxt_sw_rx_bd.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:57 -05:00
Michael Chan
6bb1947439 bnxt_en: Refactor rx SKB function.
Minor refactoring of bnxt_rx_skb() so that it can easily be replaced by
a new function that handles packets in a single page.  Also, use a
function pointer bp->rx_skb_func() to switch to a new function when
we add the new mode in the next patch.

Add a new field data_ptr that points to the packet data in the
bnxt_sw_rx_bd structure.  The original data field is changed to a void
pointer so that it can hold either the kmalloc'ed data or a page
pointer.

The last parameter of bnxt_rx_skb(), which was the length parameter, is
changed to include the payload offset of the packet in the upper 16 bits.
The offset is needed to support the rx page mode and is not used in
this existing function.
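
The packing convention described above, sketched with illustrative
macro names:

    /* Illustrative only: payload offset in the upper 16 bits, length in
     * the lower 16 bits of the combined argument. */
    #define RX_OFFSET_AND_LEN(off, len) (((u32)(off) << 16) | ((len) & 0xffff))
    #define RX_PAYLOAD_OFFSET(v)        ((v) >> 16)
    #define RX_PAYLOAD_LEN(v)           ((v) & 0xffff)

    skb = bp->rx_skb_func(bp, rxr, cons, data, data_ptr, dma_addr,
                          RX_OFFSET_AND_LEN(payload_off, len));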

v3: Added a new data_ptr parameter to bp->rx_skb_func().  The caller
has the option to modify the starting address of the packet.  This
will be needed when XDP with headroom support is added.

v2: Changed the name of the last parameter to offset_and_len to make the
code more clear.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:30:57 -05:00
Timur Tabi
b44700e975 net: qcom/emac: add ethtool support for setting pause parameters
To support setting the pause parameters, the driver can no longer just
mirror the PHY.  The set_pauseparam feature allows the driver to
force the setting in the MAC, regardless of how the PHY is configured.
This means that we now need to maintain an internal state for pause
frame support, and so get_pauseparam also needs to be updated.
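
Very roughly, the ethtool side of that looks like the sketch below (the
adapter field names and emac_reinit_locked() are assumptions, not
necessarily the actual emac identifiers):

    /* Illustrative only: remember the requested settings in the adapter
     * instead of mirroring the PHY, and reset if already running. */
    static int emac_set_pauseparam_sketch(struct net_device *netdev,
                                          struct ethtool_pauseparam *pause)
    {
            struct emac_adapter *adpt = netdev_priv(netdev);

            adpt->pause_autoneg = pause->autoneg == AUTONEG_ENABLE;
            adpt->rx_flow_control = pause->rx_pause != 0;
            adpt->tx_flow_control = pause->tx_pause != 0;

            if (netif_running(netdev))
                    return emac_reinit_locked(adpt);

            return 0;
    }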

If the interface is already running when the setting is changed, then
the interface is reset.

Note that if the MAC is configured to enable RX pause frame support
(i.e. it transmits pause frames to throttle the other end), but the
PHY is configured to block those frames, then the feature will not work.

Also move some buffer size initialization code into emac_init_adapter(),
so that it lives with similar code, including the initialization of
pause frame support.

Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 13:18:52 -05:00
Arnd Bergmann
8d1fb01df8 mlxsw: add psample dependency for spectrum
When PSAMPLE is a loadable module, spectrum must not be built-in:

drivers/net/built-in.o: In function `mlxsw_sp_rx_listener_sample_func':
spectrum.c:(.text+0xe357e): undefined reference to `psample_sample_packet'

This adds a Kconfig dependency to enforce usable configurations.

Fixes: 98d0f7b9ac ("mlxsw: spectrum: Add packet sample offloading support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Yotam Gigi <yotamg@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-07 11:44:12 -05:00
Arnd Bergmann
321fa4ffd9 net/mlx5e: fix another maybe-uninitialized false-positive
In commit abeffce ("net/mlx5e: Fix a -Wmaybe-uninitialized warning"), I fixed a
gcc warning for the ipv4 offload handling. Now we get the same warning for the
added ipv6 support:

drivers/net/ethernet/mellanox/mlx5/core/en_tc.c:815:40: warning: 'out_dev' may be used uninitialized in this function [-Wmaybe-uninitialized]

We can apply the same workaround here as well.

Fixes: ce99f6b97f ("net/mlx5e: Support SRIOV TC encapsulation offloads for IPv6 tunnels")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 16:35:12 -05:00
Parav Pandit
d0d7b10b05 net-next: treewide use is_vlan_dev() helper function.
This patch uses the is_vlan_dev() helper function instead of open-coding
the flag comparison that is_vlan_dev() performs.
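
The substitution, sketched:

    /* before: open-coded flag test */
    if (netdev->priv_flags & IFF_802_1Q_VLAN)
            real_dev = vlan_dev_real_dev(netdev);

    /* after: the helper performs exactly the same comparison */
    if (is_vlan_dev(netdev))
            real_dev = vlan_dev_real_dev(netdev);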

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Jon Maxwell <jmaxwell37@gmail.com>
Acked-by: Johannes Thumshirn <jth@kernel.org>
Acked-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 16:33:29 -05:00
Dan Carpenter
73cfb2a2e4 net/mlx4_en: fix a condition
There is a "||" vs "|" typo here so we test 0x1 instead of 0x6.
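
The class of bug, illustrated with made-up names and flag values:

    #define FLAG_PPTX       0x2
    #define FLAG_PPRX       0x4

    /* buggy: "||" evaluates to 1, so this tests bit 0x1, not 0x6 */
    if (flags & (FLAG_PPTX || FLAG_PPRX))
            update_port();

    /* fixed: "|" builds the intended 0x6 mask */
    if (flags & (FLAG_PPTX | FLAG_PPRX))
            update_port();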

Fixes: 1f8176f735 ("net/mlx4_en: Check the enabling pptx/pprx flags in SET_PORT wrapper flow")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 12:01:06 -05:00
Bert Kenward
f820c0ac6c sfc: don't rearm interrupts if busy polling
Since commit 364b605573 ("net: busy-poll: return busypolling status
to drivers"), napi_complete_done() returns a boolean that can be used
by drivers to conditionally rearm interrupts.
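
The pattern this enables in a driver's poll routine, sketched (the
my_*() helpers are placeholders):

    /* Illustrative only: rearm interrupts only when NAPI really
     * completed; busy-polling users keep them masked. */
    static int my_poll(struct napi_struct *napi, int budget)
    {
            int work_done = my_process_ring(napi, budget);

            if (work_done < budget &&
                napi_complete_done(napi, work_done))
                    my_enable_interrupts(napi);

            return work_done;
    }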

Testing with a 7142 shows a small latency improvement of ~100 ns.

Signed-off-by: Bert Kenward <bkenward@solarflare.com>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 11:59:36 -05:00
Ido Schimmel
fd76d9105b mlxsw: spectrum_router: Fix typo in comment
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 11:25:57 -05:00
Ido Schimmel
01b1aa359d mlxsw: spectrum_router: Don't read 'nud_state' without lock
We periodically ask the neighbouring system to try and resolve
neighbours that are used for nexthops, but aren't currently resolved.

However, 'nud_state' is protected by the neighbour lock, so we shouldn't
access it without taking it. Instead, we can simply check the
'connected' field of the neighbour entry, which we update upon
NEIGH_UPDATE events.
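
For reference, the two accesses look roughly like this (neigh_entry is
the driver's own cache entry; names are illustrative):

    /* reading nud_state safely requires the neighbour lock ... */
    read_lock_bh(&neigh->lock);
    resolved = neigh->nud_state & NUD_VALID;
    read_unlock_bh(&neigh->lock);

    /* ... whereas the driver-private flag, updated on NEIGH_UPDATE
     * events, needs no neighbour lock at all */
    if (!neigh_entry->connected)
            neigh_event_send(neigh, NULL);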

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 11:25:57 -05:00
Ido Schimmel
8a0b727526 mlxsw: spectrum_router: Remove redundant check
We only add neighbour entries that are also used for nexthops to
'nexthop_neighs_list', so when iterating over this list there's no need
to check that the entry is indeed used for nexthops.

Remove the redundant check.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 11:25:57 -05:00
Ido Schimmel
5c8802f14a mlxsw: spectrum_router: Simplify neighbour reflection
Up until now we had two interfaces for neighbour related configuration:
ndo_neigh_{construct,destroy} and NEIGH_UPDATE netevents. The ndos were
used to add and remove neighbours from the driver's cache, whereas the
netevent was used to reflect the neighbours into the device's tables.

However, if the NUD state of a neighbour isn't NUD_VALID or if the
neighbour is dead, then there's really no reason for us to keep it
inside our cache. The only exception to this rule are neighbours that
are also used for nexthops, which we periodically refresh to get them
resolved.

We can therefore eliminate the ndo entry point into the driver and
simplify the code, making it similar to the FIB reflection, which is
based solely on events. This also helps us avoid a locking issue, in
which the RIF cache was traversed without proper locking during
insertion into the neigh entry cache.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 11:25:56 -05:00
Ido Schimmel
de04b6a358 mlxsw: spectrum_router: Remove unused variable
Since commit 33b1341cd1 ("mlxsw: spectrum_router: Fix handling of
neighbour structure") we no longer use destination IP for neighbour
lookup, so remove it.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 11:25:56 -05:00
Ido Schimmel
e60234ddb5 mlxsw: spectrum_router: Use ordered workqueue for neigh updates
We currently associate each neighbour entry with a work item, so it's
not possible to have multiple events queued for the same neighbour
entry. However, this is about to be changed so that the neighbour entry
is only resolved when the work item is scheduled.

The above can result in a mismatch between the kernel's and the device's
neighbour table, unless the associated work items are processed in the
order in which they were submitted.

Do that by migrating the NEIGH_UPDATE work items to be processed in the
ordered workqueue which was recently introduced in mlxsw in commit
a3832b3189 ("mlxsw: core: Create an ordered workqueue for FIB
offload").

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 11:25:56 -05:00
Ido Schimmel
a0e4761d9b mlxsw: core: Queue work immediately instead of delaying it
We always use zero delay before queueing work on the ordered workqueue
('mlxsw_owq'), so use work_struct directly instead of delayed work.
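
The simplification, sketched with the generic workqueue API:

    /* before: a delayed work that was always queued with zero delay */
    INIT_DELAYED_WORK(&w->dw, handler);
    queue_delayed_work(wq, &w->dw, 0);

    /* after: a plain work_struct, queued immediately */
    INIT_WORK(&w->work, handler);
    queue_work(wq, &w->work);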

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 11:25:55 -05:00
Saeed Mahameed
8ca967ab67 net/mlx5e: Bring back bfreg uar map dedicated pointer
The 4K UAR series modified the mlx5e driver to use the new bfreg API,
and mistakenly removed the sq->uar_map iomem pointer dedicated to the
data path, which was meant to be read from the xmit path for cache
locality.

Fix that by returning that pointer to the SQ struct.

Fixes: 7309cb4ad71e ("IB/mlx5: Support 4k UAR for libmlx5")
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
2017-02-06 18:20:18 +02:00
Saeed Mahameed
b70149dd7d net/mlx5e: XDP Tx, no inline copy on ConnectX-5
ConnectX-5 and later HW generations will report min inline mode ==
MLX5_INLINE_MODE_NONE, which means the driver is not required to copy
packet headers to the inline fields of the TX WQE.

Avoid copy to inline segment in XDP TX routine when HW inline mode doesn't
require it.
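
Roughly, the check this adds in the XDP xmit routine (a sketch; exact
field and constant usage may differ):

    /* Illustrative only: copy headers to the inline segment only when
     * the reported min inline mode actually requires it. */
    if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) {
            memcpy(eseg->inline_hdr.start, xdp_data, MLX5E_XDP_MIN_INLINE);
            eseg->inline_hdr.sz = cpu_to_be16(MLX5E_XDP_MIN_INLINE);
    }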

This will improve CPU utilization and boost XDP TX performance.

Tested with xdp2 single flow:
CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
HCA: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]

Before: 7.4Mpps
After:  7.8Mpps
Improvement: 5%

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
2017-02-06 18:20:18 +02:00
Saeed Mahameed
a6f402e499 net/mlx5e: Tx, no inline copy on ConnectX-5
ConnectX-5 and later HW generations will report min inline mode ==
MLX5_INLINE_MODE_NONE, which means the driver is not required to copy
packet headers to the inline fields of the TX WQE.

When inline is not required, vlan insertion will be handled in the
TX descriptor rather than by copying to the inline segment.

For the LSO case the driver is still required to copy the headers, so
the HW can duplicate them on the wire.

This will improve CPU utilization and boost TX performance.

Tested with pktgen burst single flow:
CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
HCA: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]

Before: 15.1Mpps
After:  17.2Mpps
Improvement: 14%

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
2017-02-06 18:20:17 +02:00
Saeed Mahameed
2b31f7ae5f net/mlx5: TX WQE update
Add new TX WQE fields for ConnectX-5 vlan insertion support: type and
vlan_tci.  When type = MLX5_ETH_WQE_INSERT_VLAN, the HW will insert the
vlan and prio fields (vlan_tci) into the packet.

Those bits and the inline header fields are mutually exclusive, and are
valid only when:
MLX5_CAP_ETH(mdev, wqe_inline_mode) == MLX5_CAP_INLINE_MODE_NOT_REQUIRED
and MLX5_CAP_ETH(mdev, wqe_vlan_insert),
which will be set in ConnectX-5 and later HW generations.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
2017-02-06 18:20:16 +02:00
Daniel Jurgens
f32f5bd2eb net/mlx5: Configure cache line size for start and end padding
There is a hardware feature that will pad the start or end of a DMA to
be cache line aligned to avoid RMWs on the last cache line. The default
cache line size setting for this feature is 64B. This change configures
the hardware to use 128B alignment on systems with 128B cache lines.

In addition, we lower-bound the MPWRQ stride by the HCA cache line in
mlx5e: the MPWRQ stride should be at least the HCA cache line.  The
current default is 64B, and when the HCA_CAP.cache_line_128byte
capability is set, the MPWRQ RX stride will automatically be aligned to
128B.

Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2017-02-06 18:17:25 +02:00
Elad Raz
e158e5ef24 mlxsw: reg: Fix HTGT register length
HTGT register length is limited to 32 bytes and not 256 bytes.

Signed-off-by: Elad Raz <eladr@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 11:07:21 -05:00
Jingju Hou
b60a00f9c5 net: mvneta: implement .set_wol and .get_wol
The mvneta itself does not support WOL, but the PHY might.
So pass the calls through to the PHY.
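
The pass-through is essentially the following sketch, using the standard
phylib helpers (assuming the PHY is reachable via dev->phydev):

    static void mvneta_ethtool_get_wol(struct net_device *dev,
                                       struct ethtool_wolinfo *wol)
    {
            wol->supported = 0;
            wol->wolopts = 0;

            if (dev->phydev)
                    phy_ethtool_get_wol(dev->phydev, wol);
    }

    static int mvneta_ethtool_set_wol(struct net_device *dev,
                                      struct ethtool_wolinfo *wol)
    {
            if (!dev->phydev)
                    return -EOPNOTSUPP;

            return phy_ethtool_set_wol(dev->phydev, wol);
    }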

Signed-off-by: Jingju Hou <houjingj@marvell.com>
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-06 10:54:02 -05:00
Eric Dumazet
32e19300a4 aeroflex/greth: use napi_complete_done()
We plan to remove __napi_complete() soon;
this driver is the last user.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
3d1a6333d9 ibm/emac: use napi_complete_done()
Use napi_complete_done() instead of __napi_complete()

We plan to remove __napi_complete() to reduce NAPI complexity.

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
0eb7b85c96 qla3xxx: add GRO support
Use napi_complete_done() instead of __napi_complete() to :

1) Get support of gro_flush_timeout if opt-in
2) Not rearm interrupts for busy-polling users.
3) use standard NAPI API.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
7ea4007757 ks8695net: add GRO support
Use napi_complete_done() instead of __napi_complete() to :

1) Get support of gro_flush_timeout if opt-in
2) Not rearm interrupts for busy-polling users.
3) use standard NAPI API.

Note that rx_lock seems to be useless; NAPI logic should
not need this extra care.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
135844ef9f skge: use napi_complete_done()
Use napi_complete_done() instead of __napi_complete() to :

1) Get support of gro_flush_timeout if opt-in
2) Not rearm interrupts for busy-polling users.
3) use standard NAPI API and get rid of napi_gro_flush()

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
a396178972 ep93xx_eth: add GRO support
Use napi_complete_done() instead of __napi_complete() to :

1) Get support of gro_flush_timeout if opt-in
2) Not rearm interrupts for busy-polling users.
3) use standard NAPI API.
4) get rid of baroque code and ease maintenance.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
5b2ec6f2be pcnet32: use napi_complete_done()
Use napi_complete_done() instead of __napi_complete() to :

1) Get support of gro_flush_timeout if opt-in
2) Not rearm interrupts for busy-polling users.
3) use standard NAPI API.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
c46e9907d4 amd8111e: add GRO support
Use napi_complete_done() instead of __napi_complete() to :

1) Get support of gro_flush_timeout if opt-in
2) Not rearm interrupts for busy-polling users.
3) use standard NAPI API.
4) get rid of baroque code and ease maintenance.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
1fa8c5f33a epic100: use napi_complete_done()
Use napi_complete_done() instead of __napi_complete() to :

1) Get support of gro_flush_timeout if opt-in
2) Not rearm interrupts for busy-polling users.
3) use standard NAPI API.
4) get rid of baroque code and ease maintenance.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
ab1e7e1d26 8139cp: use napi_complete_done()
Use napi_complete_done() instead of __napi_complete() to :

1) Get support of gro_flush_timeout if opt-in
2) Not rearm interrupts for busy-polling users.
3) use standard NAPI API.
4) Eventually get rid of napi_gro_flush() in the future.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
Eric Dumazet
617f01211b 8139too: use napi_complete_done()
Use napi_complete_done() instead of __napi_complete() to :

1) Get support of gro_flush_timeout if opt-in
2) Not rearm interrupts for busy-polling users.
3) use standard NAPI API.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-05 16:11:57 -05:00
David S. Miller
a076d1bdc6 Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2017-02-03

This series contains updates to i40e/i40evf only.

Jake fixes up the driver to not call i40e_vsi_kill_vlan() or
i40e_vsi_add_vlan() when the PVID is set or when the VID is less than 1.
Cleaned up a check which really is not needed since there is no real
reason why we cannot just call i40e_del_mac_all_vlan() directly.  Renamed
functions to better reflect their actual purpose and how they function
in a clearer manner.

Bimmy cleans up unused/deprecated macros.

Mitch cleans up unused device ids which were intended for use when
running Linux VF drivers under Hyper-V, but found to be not needed.
Then cleaned up a function that is no longer needed since the client
open and close functions were refactored.  Adds a sleep without timeout
until the reply from the PF driver has been received since the iWARP
client cannot continue until the operation has been completed.

Tushar Dave fixes an issue seen on SPARC where the use of the 'packed'
directive was causing kernel unaligned errors.

Alex does a refactor to pull some data off of the stack and store it
in the transmit buffer info section of the transmit ring.

Alan fixes a bug which was caused by passing a bad register value to the
firmware, by refactoring the macro INTRL_USEC_TO_REG into a static
inline function.  Also added feedback to the user as to the actual
interrupt rate limit being used when it differs from the requested limit.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-04 12:13:27 -05:00
David S. Miller
7a655c6324 enic: Remove local ndo_busy_poll() implementation.
We do polling generically these days.

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-03 17:28:21 -05:00
Eric Dumazet
508aac6dee ixgbevf: get rid of custom busy polling code
In linux-4.5, busy polling was implemented in the core
NAPI stack, meaning that all custom implementations can
be removed from drivers.

Not only do we remove lots of code, we also remove one lock
operation in the fast path and allow GRO to do its job.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-03 17:17:53 -05:00
Eric Dumazet
3ffc1af576 ixgbe: get rid of custom busy polling code
In linux-4.5, busy polling was implemented in the core
NAPI stack, meaning that all custom implementations can
be removed from drivers.

Not only do we remove lots of code, we also remove one lock
operation in the fast path and allow GRO to do its job.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-03 17:17:52 -05:00