Commit Graph

481936 Commits

David S. Miller
8d326d818a Merge branch 'micrel-next'
Johan Hovold says:

====================
net: phy: micrel: refactoring and KSZ8081/KSZ8091 features

This series cleans up and refactors parts of the micrel PHY driver, and
adds support for broadcast-address-disable and led-mode configuration
for KSZ8081 and KSZ8091 PHYs.

Specifically, this enables dual KSZ8081 setups (which are limited to
using addresses 0 and 3).

A follow-up series will add device-type abstraction, which will allow for
further refactoring and shared initialisation code.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:41 -05:00
Johan Hovold
7b52314cc4 net: phy: micrel: enable led-mode for KSZ8081/KSZ8091
Enable led-mode configuration for KSZ8081 and KSZ8091.

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:37 -05:00
Johan Hovold
5a16778efc net: phy: micrel: clean up led-mode setup
Clean up led-mode setup by introducing proper defines for PHY Control
registers 1 and 2 and only passing the register to the setup function.

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:36 -05:00
Johan Hovold
b7035860a1 net: phy: micrel: refactor led-mode error handling
Refactor led-mode error handling.

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:36 -05:00
Johan Hovold
8620546c39 net: phy: micrel: add led-mode sanity check
Make sure never to update more than two bits when setting the led mode,
something which could for example change the reference-clock setting.
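
As a rough sketch of what such a check implies (variable name assumed):

 /* The led-mode field is two bits wide, so only values 0..3 are valid;
  * reject anything larger rather than touching unrelated bits such as
  * the reference-clock select. */
 if (val > 3)
         return -EINVAL;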

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:36 -05:00
Johan Hovold
57a38effa5 net: phy: micrel: disable broadcast for KSZ8081/KSZ8091
Disable PHY address 0 as the broadcast address, so that it can be used
as a unique (non-broadcast) address on a shared bus.

Note that this can also be configured using the B-CAST_OFF pin on
KSZ8091, but that KSZ8081 lacks this pin and is also limited to
addresses 0 and 3.

Specifically, this allows for dual KSZ8081 setups.

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:36 -05:00
Johan Hovold
bde151296a net: phy: micrel: refactor broadcast disable
Refactor and clean up broadcast disable.

Some Micrel PHYs have a broadcast-off bit in the Operation Mode Strap
Override register which can be used to disable PHY address 0 as the
broadcast address, so that it can be used as a unique (non-broadcast)
address on a shared bus.

Note that the KSZPHY_OMSO_RMII_OVERRIDE bit is set by default on
KSZ8021/8031.
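
As a hedged sketch of the read-modify-write involved (register and bit
names here are assumptions, not the exact driver code):

 ret = phy_read(phydev, MII_KSZPHY_OMSO);   /* Operation Mode Strap Override */
 if (ret < 0)
         return ret;
 /* disable PHY address 0 as the broadcast address */
 return phy_write(phydev, MII_KSZPHY_OMSO, ret | KSZPHY_OMSO_B_CAST_OFF);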

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:36 -05:00
Johan Hovold
00aee09500 net: phy: micrel: use BIT macro
Use BIT macro for bitmask definitions.
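
For illustration, a conversion of this kind (bit name and position are
hypothetical) looks like:

 #define KSZPHY_OMSO_B_CAST_OFF  (1 << 9)        /* before */
 #define KSZPHY_OMSO_B_CAST_OFF  BIT(9)          /* after  */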

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:35 -05:00
Johan Hovold
5bb8fc0d10 net: phy: micrel: fix config_intr error handling
Make sure never to update the control register with random data (an
error code) by checking the return value after reading it.
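
The pattern, as a sketch with assumed register and mask names:

 temp = phy_read(phydev, MII_KSZPHY_INTCS);
 if (temp < 0)
         return temp;                    /* don't write an error code back */
 temp |= KSZPHY_INTCS_ALL;               /* hypothetical interrupt mask */
 return phy_write(phydev, MII_KSZPHY_INTCS, temp);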

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:35 -05:00
Johan Hovold
c88c7d3204 dt/bindings: fix documentation of ethernet-phy compatible property
A recent commit extended the documentation of the ethernet-phy
compatible property, but placed the new paragraph under the max-speed
property.

Fixes: f00e756ed1 ("dt: Document a compatible entry for MDIO ethernet Phys")
Cc: devicetree@vger.kernel.org
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:55:35 -05:00
David S. Miller
59af81a144 Merge branch 'module_phy_driver'
Johan Hovold says:

====================
net: phy: add module_phy_driver macro

Add a module_phy_driver macro that can be used by PHY drivers that only
call phy_driver_register or phy_drivers_register (and the corresponding
unregister functions) in their module init (and exit).

This allows us to eliminate a lot of boilerplate code.

Split in three patches (actual macro and two driver change classes) in
order to facilitate review.
====================

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:53:11 -05:00
Johan Hovold
50fd71507e net: phy: replace phy_drivers_register calls
Replace module init/exit which only calls phy_drivers_register with
module_phy_driver macro.

Tested using the Micrel driver; otherwise compile-tested only.

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:52:53 -05:00
Johan Hovold
116dffa0b5 net: phy: replace phy_driver_register calls
Replace module init/exit which only calls phy_driver_register with
module_phy_driver macro.

Compile tested only.

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:52:53 -05:00
Johan Hovold
c31accd159 net: phy: add module_phy_driver macro
Add helper macro for PHY drivers which do not do anything special in
module init/exit. This will allow us to eliminate a lot of boilerplate
code.
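
A sketch of the boilerplate this removes, assuming a driver that registers
an array of struct phy_driver named foo_drivers (names hypothetical):

 /* Before: init/exit that only (un)register the drivers. */
 static int __init foo_init(void)
 {
         return phy_drivers_register(foo_drivers, ARRAY_SIZE(foo_drivers));
 }
 module_init(foo_init);

 static void __exit foo_exit(void)
 {
         phy_drivers_unregister(foo_drivers, ARRAY_SIZE(foo_drivers));
 }
 module_exit(foo_exit);

 /* After: a single line. */
 module_phy_driver(foo_drivers);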

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:52:52 -05:00
David L Stevens
c647cc3fd5 sunvnet: fix NULL pointer dereference
This patch fixes a NULL pointer dereference when __tx_port_find() doesn't
find a matching port.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 13:51:23 -05:00
David S. Miller
ee47ad42c5 Merge branch 'skb_alloc_pages'
Alexander Duyck says:

====================
Replace __skb_alloc_pages with simpler function

This patch series replaces __skb_alloc_pages with a much simpler function,
__dev_alloc_pages.  The main difference between the two is that
__skb_alloc_pages had an sk_buff pointer that was being passed as NULL in
call places where it was called.  In a couple of cases the NULL was passed
by variable and this led to unnecessary code being run.

As such in order to simplify things the __dev_alloc_pages call only takes a
mask and the page order being requested.  In addition it takes advantage of
several behaviors already built into the page allocator so that it can just
set GFP flags unconditionally.

v2: Renamed functions to dev_alloc_page(s) instead of netdev_alloc_page(s)
    Removed __GFP_COLD flag from usb code as it was redundant
v3: Update patch descriptions and subjects to match changes in v2
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 00:00:20 -05:00
Alexander Duyck
160d2aba55 net: Remove __skb_alloc_page and __skb_alloc_pages
Remove the two functions which are now dead code.

Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 00:00:14 -05:00
Alexander Duyck
42b17f0955 fm10k/igb/ixgbe: Replace __skb_alloc_page with dev_alloc_page
The Intel drivers were pretty much just using the plain vanilla GFP flags
in their calls to __skb_alloc_page so this change makes it so that they use
dev_alloc_page which just uses GFP_ATOMIC for the gfp_flags value.
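
As a simplified sketch of the relationship (not necessarily the exact
upstream definitions):

 static inline struct page *dev_alloc_pages(unsigned int order)
 {
         return __dev_alloc_pages(GFP_ATOMIC, order);
 }

 static inline struct page *dev_alloc_page(void)
 {
         return dev_alloc_pages(0);
 }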

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Matthew Vick <matthew.vick@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 00:00:14 -05:00
Alexander Duyck
5693d284dd phonet: Replace calls to __skb_alloc_page with __dev_alloc_page
Replace the calls to __skb_alloc_page that are passed NULL with calls to
__dev_alloc_page.

In addition remove __GFP_COLD flag from allocations as we only want it for
the Rx buffer which is taken care of by __dev_alloc_skb, not for any
secondary allocations such as the queue element transmit descriptors.

Cc: Oliver Neukum <oliver@neukum.org>
Cc: Felipe Balbi <balbi@ti.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 00:00:14 -05:00
Alexander Duyck
aa9cd31c3f cxgb4/cxgb4vf: Replace __skb_alloc_page with __dev_alloc_page
Drop the bloated use of __skb_alloc_page and replace it with
__dev_alloc_page.  In addition update the one other spot that is
allocating a page so that it allocates with the correct flags.

Cc: Hariprasad S <hariprasad@chelsio.com>
Cc: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 00:00:13 -05:00
Alexander Duyck
71dfda58aa net: Add device Rx page allocation function
This patch implements __dev_alloc_pages and __dev_alloc_page.  These are
meant to replace the __skb_alloc_pages and __skb_alloc_page functions.  The
reason for doing this is that it occurred to me that __skb_alloc_page is
supposed to be passed an sk_buff pointer, but it is NULL in all cases where
it is used.  Worse, in the case of ixgbe it is passed NULL via the
sk_buff pointer in the rx_buffer_info structure, which means the compiler
cannot strip it out.

The naming for these functions is based on dev_alloc_skb and __dev_alloc_skb.
There was originally a netdev_alloc_page; however, that was passed a
net_device pointer and this function is not, so I thought it best to follow
that naming scheme, since that is the same difference as between dev_alloc_skb
and netdev_alloc_skb.

In the case of anything greater than order 0 it is assumed that we want a
compound page so __GFP_COMP is set for all allocations as we expect a
compound page when assigning a page frag.

The other change in this patch is to exploit the behaviors of the page
allocator in how it handles flags.  So for example we can always set
__GFP_COMP and __GFP_MEMALLOC since they are ignored if they are not
applicable or are overridden by another flag.
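
A simplified sketch of the resulting helper (close to, but not guaranteed
to match, the merged code):

 static inline struct page *__dev_alloc_pages(gfp_t gfp_mask,
                                              unsigned int order)
 {
         /* Request __GFP_COMP/__GFP_MEMALLOC/__GFP_COLD unconditionally;
          * the page allocator ignores flags that do not apply. */
         gfp_mask |= __GFP_COLD | __GFP_COMP | __GFP_MEMALLOC;

         return alloc_pages_node(NUMA_NO_NODE, gfp_mask, order);
 }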

Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 00:00:13 -05:00
Joe Perches
6c91023dc3 irda: Remove IRDA_<TYPE> logging macros
And use the more common mechanisms directly.

Other miscellanea:

o Coalesce formats
o Add missing newlines
o Realign arguments
o Remove unnecessary OOM message logging as
  there's a generic stack dump already on OOM.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 18:11:00 -05:00
WANG Cong
09626e9d15 net: kill netif_copy_real_num_queues()
vlan was the only user of netif_copy_real_num_queues(),
but it no longer calls it after
commit 4af429d29b ("vlan: lockless transmit path").
So we can just remove it.

Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 16:30:16 -05:00
David S. Miller
2387e3b59f Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates 2014-11-11

This series contains updates to i40e, i40evf and ixgbe.

Kamil updated the i40e and i40evf drivers to poll the firmware more slowly,
since we were polling faster than the firmware could respond.

Shannon updates i40e to add a check to keep the service_task from
running the periodic tasks more than once per second, while still
allowing quick action to service the events.

Jesse cleans up the throttle rate code by fixing the minimum interrupt
throttle rate and removing some unused defines.

Mitch makes the early init admin queue message receive code more robust
by handling messages in a loop and ignoring those that we are not
interested in.  This also gets rid of some scary log messages that
really do not indicate a problem.

Don provides several ixgbe patches.  The first fixes an X540 completion
timeout issue, where topologies including a few levels of PCIe switching
can cause the X540 to run into an unexpected completion error.  He also
cleans up the functionality in ixgbe_ndo_set_vf_vlan() in preparation for
future work, and adds support for X550 MACs to the driver.

v2:
 - Remove code comment in patch 01 of the series, based on feedback from
   David Laight
 - Updated the "goto" to "break" statements in patch 06 of the series,
   based on feedback from Sergei Shtylyov
 - Initialized the variable err due to the possibility of use before
   being assigned a value in patch 07 of the series
 - Added patch "ixgbe: add helper function for setting RSS key in
   preparation of X550" since it is needed for the addition of X550 MAC
   support
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 16:26:42 -05:00
Sudip Mukherjee
8bca81d987 usbnet: smsc95xx: dereferencing NULL pointer
We were dereferencing dev to initialize pdata, but just after that we
have a BUG_ON(!dev).  So we were basically dereferencing the pointer
first and then testing it for NULL.

Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 16:24:08 -05:00
Joe Perches
d65c4e4e0a irda: Simplify IRDA logging macros
These are the same as net_<level>_ratelimited, so
use the more common style in the macro definition.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 16:21:59 -05:00
WANG Cong
d7480fd3b1 neigh: remove dynamic neigh table registration support
Currently there are only three neigh tables in the whole kernel:
arp table, ndisc table and decnet neigh table. What's more,
we don't support registering multiple tables per family.
Therefore we can just make these tables statically built-in.

Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 15:23:54 -05:00
Andy Shevchenko
b2e2f0c779 stmmac: split to core library and probe drivers
Instead of registering the platform and PCI drivers in one module, let's move
the necessary bits to where they belong. During this procedure we convert the
module registration part to use the module_*_driver() macros, which makes the
code simpler.

From now on the driver consists of three parts: the core library, the PCI
driver, and the platform driver.
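
Illustratively, each probe driver can then be registered with the generic
helpers (driver struct names are assumptions):

 module_pci_driver(stmmac_pci_driver);           /* PCI probe driver */
 module_platform_driver(stmmac_pltfr_driver);    /* platform probe driver */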

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 14:34:39 -05:00
Joe Perches
ba7a46f16d net: Convert LIMIT_NETDEBUG to net_dbg_ratelimited
Use the more common dynamic_debug capable net_dbg_ratelimited
and remove the LIMIT_NETDEBUG macro.

All messages are still ratelimited.

Some KERN_<LEVEL> uses are changed to KERN_DEBUG.
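
A typical conversion then looks like this (message text is only an example):

 /* before */
 LIMIT_NETDEBUG(KERN_DEBUG "UDP: short packet: %d\n", len);

 /* after */
 net_dbg_ratelimited("UDP: short packet: %d\n", len);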

This may have some negative impact on messages that were
emitted at KERN_INFO that are not enabled at all unless
DEBUG is defined or dynamic_debug is enabled.  Even so,
these messages are now _not_ emitted by default.

This also eliminates the use of the net_msg_warn sysctl
"/proc/sys/net/core/warnings".  For backward compatibility,
the sysctl is not removed, but it has no function.  The extern
declaration of net_msg_warn is removed from sock.h and made
static in net/core/sysctl_net_core.c

Miscellanea:

o Update the sysctl documentation
o Remove the embedded uses of pr_fmt
o Coalesce format fragments
o Realign arguments

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 14:10:31 -05:00
Denis Kirjanov
5b61c4db49 PPC: bpf_jit_comp: add SKF_AD_HATYPE instruction
Add BPF extension SKF_AD_HATYPE to ppc JIT to check
the hw type of the interface

Before:
[   57.723666] test_bpf: #20 LD_HATYPE
[   57.723675] BPF filter opcode 0020 (@0) unsupported
[   57.724168] 48 48 PASS

After:
[  103.053184] test_bpf: #20 LD_HATYPE 7 6 PASS
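
For reference, a user-space classic BPF filter exercising this extension
could look like the sketch below (illustrative, not part of the patch):

 #include <linux/filter.h>
 #include <linux/if_arp.h>

 struct sock_filter code[] = {
         /* A <- hardware type of the receiving interface */
         BPF_STMT(BPF_LD | BPF_W | BPF_ABS, SKF_AD_OFF + SKF_AD_HATYPE),
         /* accept only Ethernet frames */
         BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ARPHRD_ETHER, 0, 1),
         BPF_STMT(BPF_RET | BPF_K, 0xffff),      /* accept */
         BPF_STMT(BPF_RET | BPF_K, 0),           /* drop   */
 };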

CC: Alexei Starovoitov<alexei.starovoitov@gmail.com>
CC: Daniel Borkmann<dborkman@redhat.com>
CC: Philippe Bergheaud<felix@linux.vnet.ibm.com>
Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org>

v2: address Alexei's comments
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:39:47 -05:00
David S. Miller
4083c8056d Merge branch 'net_next_ovs' of git://git.kernel.org/pub/scm/linux/kernel/git/pshelar/openvswitch
Pravin B Shelar says:

====================
Open vSwitch

The following batch of patches brings feature parity between the upstream
ovs module and the out-of-tree ovs module.

Two features are added: the first adds support for exporting egress
tunnel information for a packet, which is used to improve visibility
into network traffic. The second allows the userspace vswitchd process
to probe ovs module features. The other patches are optimizations and
code cleanup.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:32:25 -05:00
Joe Perches
a2ae6007a4 dsa: Use netdev_<level> instead of printk
Neaten and standardize the logging output.

Other miscellanea:

o Use pr_notice_once instead of a guard flag.
o Convert existing pr_<level> uses too.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:30:57 -05:00
David S. Miller
008e81656c Merge branch 'mlx4-next'
Or Gerlitz says:

====================
mlx4: Add CHECKSUM_COMPLETE support

These patches from Shani, Matan and myself add support for
CHECKSUM_COMPLETE reporting on non TCP/UDP packets such as
GRE and ICMP. I'd like to deeply thank Jerry Chu for his
innovation and support in that effort.

Based on the feedback from Eric and Ido Shamay, in V2 we dropped
the patch which removed the calls to napi_gro_frags() and added
a patch which makes the RX code go through that path
regardless of the checksum status.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:20:07 -05:00
Shani Michaeli
f8c6455bb0 net/mlx4_en: Extend checksum offloading by CHECKSUM COMPLETE
When processing received traffic, pass CHECKSUM_COMPLETE status to the
stack, with a calculated checksum for non-TCP/UDP packets (such
as GRE or ICMP).

Although the stack expects checksum which doesn't include the pseudo
header, the HW adds it. To address that, we are subtracting the pseudo
header checksum from the checksum value provided by the HW.

In the IPv6 case, we also compute/add the IP header checksum which
is not added by the HW for such packets.
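
A hedged sketch of the IPv4 side of that adjustment (variable names
assumed, not the driver's exact code):

 /* hw_checksum covers the pseudo header as well; subtract it so the
  * stack sees the value it expects for CHECKSUM_COMPLETE. */
 __wsum pseudo = csum_tcpudp_nofold(iph->saddr, iph->daddr,
                                    ntohs(iph->tot_len) - (iph->ihl << 2),
                                    iph->protocol, 0);

 skb->csum = csum_sub(hw_checksum, pseudo);
 skb->ip_summed = CHECKSUM_COMPLETE;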

Cc: Jerry Chu <hkchu@google.com>
Signed-off-by: Shani Michaeli <shanim@mellanox.com>
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:20:02 -05:00
Shani Michaeli
dd65beac48 net/mlx4_en: Extend usage of napi_gro_frags
We can call napi_gro_frags for all the received traffic regardless
of the checksum status. Specifically, received packets whose status
is CHECKSUM_NONE (and soon to be added CHECKSUM_COMPLETE)
are eligible for napi_gro_frags as well.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Shani Michaeli <shanim@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:20:02 -05:00
David S. Miller
b00394c007 Merge branch 'so_incoming_cpu'
Eric Dumazet says:

====================
net: SO_INCOMING_CPU support

SO_INCOMING_CPU socket option (read by getsockopt()) provides
an alternative to RPS/RFS for high-performance servers using
multi-queue NICs.

TCP should use sk_mark_napi_id() for established sockets only.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:00:11 -05:00
Eric Dumazet
2c8c56e15d net: introduce SO_INCOMING_CPU
An alternative to RPS/RFS is to use hardware support for multiple
queues.

Then split a set of millions of sockets among worker threads, each
one using epoll() to manage events on its own socket pool.

Ideally, we want one thread per RX/TX queue/cpu, but we have no way to
know after accept() or connect() on which queue/cpu a socket is managed.

We normally use one cpu per RX queue (IRQ smp_affinity being properly
set), so remembering on socket structure which cpu delivered last packet
is enough to solve the problem.

After accept(), connect(), or even file-descriptor passing between
processes, applications can use:

 int cpu;
 socklen_t len = sizeof(cpu);

 getsockopt(fd, SOL_SOCKET, SO_INCOMING_CPU, &cpu, &len);

And use this information to put the socket into the right silo
for optimal performance, so that the whole networking stack runs
on the appropriate cpu, without the need to send IPIs (as RPS/RFS would).
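
For example (continuing the snippet above; worker array and helper are
hypothetical), a server could hand the fd to the worker pinned to that cpu:

 if (cpu >= 0 && cpu < nr_workers)
         queue_fd_to_worker(&workers[cpu], fd);          /* matching silo */
 else
         queue_fd_to_worker(&workers[fd % nr_workers], fd);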

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:00:06 -05:00
Eric Dumazet
3d97379a67 tcp: move sk_mark_napi_id() at the right place
sk_mark_napi_id() is used to record, for a flow, the napi id of incoming
packets for busy poll's sake.
We should do this only on established flows, not on listeners.

This was 'working' by virtue of the socket cloning, but doing
this on SYN packets is unnecessary cache line dirtying.

Even if we move sk_napi_id into the same cache line as sk_lock,
we are working to make SYN processing lockless, so it is desirable
to set sk_napi_id only for established flows.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:00:05 -05:00
Don Skidmore
d1b849b9e9 ixgbe: add helper function for setting RSS key in preparation of X550
Split off the setting of the RSS key into its own function.  This
will help when we add support for X550 which can have different
RSS keys per pool.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:43:23 -08:00
Don Skidmore
9a75a1ac77 ixgbe: Add new support for X550 MAC's
This patch will add in the new MAC defines and fit it into the switch
cases throughout the driver.  New functionality and enablement support will
be added in following patches.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:18:56 -08:00
Don Skidmore
8d697e7e54 ixgbe: cleanup move setting PFQDE.HIDE_VLAN to support function.
Move setting of drop enable to support function.  This not only makes the
code more readable but is also prep for following patches that add
additional MAC support.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:18:49 -08:00
Don Skidmore
2b509c0cd2 ixgbe: cleanup ixgbe_ndo_set_vf_vlan
Clean up functionality in ixgbe_ndo_set_vf_vlan that will simplify later
patches.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:18:36 -08:00
Don Skidmore
71bde60191 ixgbe: fix X540 Completion timeout
On topologies including a few levels of PCIe switching, the X540 can run into
an unexpected completion error.  We get around this by waiting, after enabling
loopback, a sufficient amount of time until the Tx Data Fetch is sent.  We then
poll the pending transaction bit to ensure we received the completion.  Only
then do we go on to clear the buffers.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:05:27 -08:00
Mitch Williams
cc0529271f i40evf: don't use more queues than CPUs
It's kind of silly to configure and attempt to use a bunch of queue
pairs when you're running on a single (virtual) CPU. Instead of
unconditionally configuring all of the queues that the PF gives us,
clamp the number of queue pairs to the number of CPUs.
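
The clamp itself is essentially a one-liner, sketched here with assumed
variable names:

 /* never use more queue pairs than there are CPUs to service them */
 num_active_queues = min_t(int, num_req_queues, num_online_cpus());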

Change-ID: I321714c9e15072ee76de8f95ab9a81f86ed347d1
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:02:00 -08:00
Mitch Williams
f8d4db35e8 i40evf: make early init processing more robust
In early init, if we get an unexpected message from the PF (such as link
status), we just kick an error back to the init task, causing it to
restart its state machine and delaying initialization.

Make the early init AQ message receive code more robust by handling
messages in a loop, and ignoring those that we aren't interested in.
This also gets rid of some scary log messages that really didn't
indicate a problem.

Change-ID: I620e8c72e49c49c665ef33eeab2425dd10e721cf
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:01:54 -08:00
Jesse Brandeburg
79442d38b3 i40e: clean up throttle rate code
The interrupt throttle rate minimum is actually 2us, so
fix that define and while we are there, remove some unused defines.

Change some strings in the function to be a bit less wrappy, and
express the correct limits.

Change-ID: I96829bbc77935e0b57c6f0fc1439fb4152b2960a
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:01:48 -08:00
Shannon Nelson
215367171b i40e: don't do link_status or stats collection on every ARQ
The ARQ events cause a service_task execution, and we do a link_status
check and full stats gathering for each service_task.  However, when
there are a lot of ARQ events, such as when doing an NVM update, we end up
doing 10's if not 100's of these per second, thereby heavily abusing the
PCI bus and especially the Firmware.  This patch adds a check to keep the
service_task from running these periodic tasks more than once per second,
while still allowing quick action to service the events.
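
A sketch of such a throttle (field name assumed):

 /* run link_status checks and stats gathering at most once per second */
 if (time_before(jiffies, pf->service_timer_previous + HZ))
         return;
 pf->service_timer_previous = jiffies;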

Change-ID: Iec7670c37bfae9791c43fec26df48aea7f70b33e
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 05:52:46 -08:00
Kamil Krawczyk
0db4e162e6 i40e: poll firmware slower
The code was polling the firmware tail register for completion every
10 microseconds, which is way faster than the firmware can respond.
This changes the poll interval to 1ms, which reduces polling CPU
utilization, and the number of times we loop.

The maximum delay is still 100ms.

Change-ID: I4bbfa6b66d802890baf8b4154061e55942b90958
Signed-off-by: Kamil Krawczyk <kamil.krawczyk@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 05:44:16 -08:00
Eric Dumazet
2e1af7d74f mlx4: restore conditional call to napi_complete_done()
After commit 1a28817282 ("mlx4: use napi_complete_done()") we ended up
calling napi_complete_done() even in the case where the NAPI poll consumed
all of its budget.

This added extra interrupt pressure, this patch restores proper
behavior.
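
The NAPI poll ending this restores looks roughly like the following sketch
(variable names and the re-arm helper are assumptions):

 if (done < budget) {
         /* all work finished within budget: report it and re-arm the IRQ */
         napi_complete_done(napi, done);
         mlx4_en_arm_cq(priv, cq);
 }
 return done;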

Signed-off-by: Eric Dumazet <edumazet@google.com>
Fixes: 1a28817282 ("mlx4: use napi_complete_done()")
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 21:09:03 -05:00
David S. Miller
d21385fada Merge branch 'sunvnet-next'
Sowmini Varadhan says:

====================
sunvnet: edge-case/race-conditions bug fixes

This patch series contains fixes for race-conditions in sunvnet,
that can be encountered when there is a difference in latency between
producer and consumer.

Patch 1 addresses a case when the STOPPED LDC ack from a peer is
processed before vnet_start_xmit can finish updating the dr->prod
state.

Patch 2 fixes the edge-case when outgoing data and incoming
stopped-ack cross each other in flight.

Patch 3 adds a missing rcu_read_unlock(), found by code-inspection.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 21:05:43 -05:00