In this case, we need to add an element to the device private structure
to hold the original LED state.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The correct usage is "static inline void" instead of "static void inline".
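For illustration only (a generic function, not the one touched by this patch):

    static void inline bad_order(void)  { }   /* ordering the warning is about */
    static inline void good_order(void) { }   /* preferred ordering */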
Signed-off-by: G.Balaji <balajig81@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Update the 5709 MIPS firmware to 6.2.1a to fix an iSCSI performance
regression. There was an unnecessary context read in the fast path
affecting performance.
Update bnx2 to 2.1.6.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add missing __rcu annotations and helpers.
Minor: fix some rcu_dereference() calls in macvtap.
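A minimal sketch of the annotation pattern; the structure and field names
(demo_port, demo_queue) are placeholders, not the actual macvtap symbols:

    #include <linux/rcupdate.h>

    struct demo_queue;

    struct demo_port {
        struct demo_queue __rcu *queue;   /* sparse now checks accesses */
    };

    /* reader side: caller must hold rcu_read_lock() */
    static struct demo_queue *demo_get_queue(struct demo_port *port)
    {
        return rcu_dereference(port->queue);
    }

    /* writer side: publish the new pointer under the update-side lock */
    static void demo_set_queue(struct demo_port *port, struct demo_queue *q)
    {
        rcu_assign_pointer(port->queue, q);
    }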
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On PPC, for example, AER is not supported and we see an unnecessary AER
error message without this patch:
bnx2 0003:01:00.1: pci_cleanup_aer_uncorrect_error_status failed 0xfffffffb
Reported-by: Breno Leitao <leitao@linux.vnet.ibm.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
TSO does not work if the VLAN tag is in the packet (non-accelerated).
We may be able to remove this restriction in future firmware.
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Quoting Ben Hutchings: we presumably won't be defining features that
can only be enabled on 64-bit architectures.
Occurrences found by `grep -r` on net/, drivers/net, include/
[ Move features and vlan_features next to each other in
struct netdev, as per Eric Dumazet's suggestion -DaveM ]
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
Update the MIPS firmware to 6.2.1, with improved small-packet performance
in RSS mode and an iSCSI CID allocation bug fix on the 5708.
Update driver version to 2.0.21.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When changing the ring size, we free all memory, including the status
block memory. If we're in INTA mode and sharing the IRQ, the IRQ handler
can be called and will dereference the NULL status block pointer.
Because of the lockless design of the IRQ handler, there is no simple
way to synchronize with it, so we avoid the problem by freeing the IRQ
before freeing the status block memory.
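Schematically, the teardown order becomes the following (a sketch with
placeholder names like demo_priv; the real driver frees multiple IRQs and
rings):

    static void demo_free_for_ring_change(struct demo_priv *bp)
    {
        /* Free the IRQ first: after free_irq() returns, the (possibly
         * shared) handler can no longer run and dereference a NULL
         * status block pointer.
         */
        free_irq(bp->irq, bp);

        dma_free_coherent(&bp->pdev->dev, bp->status_blk_size,
                          bp->status_blk, bp->status_blk_mapping);
        bp->status_blk = NULL;
    }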
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael pointed out that bnx2_close() already cancels bp->reset_task
and thus it is guaranteed to be idle when bnx2_remove_one() is called.
Remove the unnecessary cancel_work_sync() in remove_one.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Using static const generally increases object text and decreases data size.
It also generally decreases overall object size.
Signed-off-by: Joe Perches <joe@perches.com>
flush_scheduled_work() is on its way out. This patch contains simple
conversions to replace flush_scheduled_work() usage with direct
cancels and flushes.
Directly cancel the used works on driver detach and flush them in
other cases.
The conversions are mostly straightforward and the only dangers are:
* Forgetting to cancel/flush one or more used works.
* Cancelling when a work should be flushed (i.e. the work must be
executed once scheduled, whether the driver is detaching or not).
I've gone over the changes multiple times, but it would be much
appreciated if you could review with the above points in mind.
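As a rough example of the conversion pattern (a hypothetical demo driver
with a single reset_task work item; names are placeholders):

    struct demo_priv {
        struct work_struct reset_task;
        /* ... */
    };

    static void demo_remove_one(struct pci_dev *pdev)
    {
        struct demo_priv *bp = pci_get_drvdata(pdev);

        /* driver detach: the work need not run anymore */
        cancel_work_sync(&bp->reset_task);
        /* ... rest of teardown ... */
    }

    static int demo_suspend(struct pci_dev *pdev, pm_message_t state)
    {
        struct demo_priv *bp = pci_get_drvdata(pdev);

        /* other paths: the work must still run if already scheduled */
        flush_work(&bp->reset_task);
        /* ... */
        return 0;
    }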
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jay Cliburn <jcliburn@gmail.com>
Cc: Michael Chan <mchan@broadcom.com>
Cc: Divy Le Ray <divy@chelsio.com>
Cc: e1000-devel@lists.sourceforge.net
Cc: Vasanthy Kolluri <vkolluri@cisco.com>
Cc: Samuel Ortiz <samuel@sortiz.org>
Cc: Lennert Buytenhek <buytenh@wantstofly.org>
Cc: Andrew Gallatin <gallatin@myri.com>
Cc: Francois Romieu <romieu@fr.zoreil.com>
Cc: Ramkrishna Vepa <ramkrishna.vepa@exar.com>
Cc: Matt Carlson <mcarlson@broadcom.com>
Cc: David Brownell <dbrownell@users.sourceforge.net>
Cc: Shreyas Bhatewara <sbhatewara@vmware.com>
Cc: netdev@vger.kernel.org
In KVM passthrough mode, the driver may not have config access to
non-standard registers. The BNX2_PCICFG_MISC_CONFIG config register
access used to set up mailbox swapping can be done using MMIO instead.
Update version to 2.0.20.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The 5709 chip requires the BNX2_MISC_NEW_CORE_CTL_DMA_ENABLE bit to be
cleared and pending DMAs to be polled for completion before the chip reset.
Without this step, we've seen NMIs during repeated resets of the chip.
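Roughly, the sequence looks like the sketch below; apart from the
BNX2_MISC_NEW_CORE_CTL_DMA_ENABLE bit named above, the register accessors,
the idle test and the timeout are placeholders, not the real bnx2 code:

    u32 val;
    int i;

    /* stop new DMA activity */
    val = demo_reg_rd(bp, BNX2_MISC_NEW_CORE_CTL);
    val &= ~BNX2_MISC_NEW_CORE_CTL_DMA_ENABLE;
    demo_reg_wr(bp, BNX2_MISC_NEW_CORE_CTL, val);

    /* wait for outstanding DMAs to drain before resetting the chip */
    for (i = 0; i < 100; i++) {
        if (demo_dma_engine_idle(bp))
            break;
        msleep(1);
    }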
Signed-off-by: Eddie Wai <waie@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use vzalloc() and vzalloc_node() in net drivers
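The conversion itself is mechanical; a generic before/after (not the exact
hunks):

    /* before */
    ptr = vmalloc(size);
    if (!ptr)
        return -ENOMEM;
    memset(ptr, 0, size);

    /* after */
    ptr = vzalloc(size);
    if (!ptr)
        return -ENOMEM;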
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Jon Mason <jon.mason@exar.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some cards don't support changing VLAN offloading settings. Make the
ethtool set_flags operation return -EINVAL in those cases.
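A sketch of what the driver-side handler can look like; DEMO_SUPPORTED_FLAGS
and demo_set_flags are illustrative names, not the actual ethtool core change:

    /* flags this hardware can actually toggle */
    #define DEMO_SUPPORTED_FLAGS   ETH_FLAG_LRO

    static int demo_set_flags(struct net_device *dev, u32 data)
    {
        /* reject attempts to change flags the card cannot toggle,
         * e.g. VLAN offload on hardware without that capability
         */
        if (data & ~DEMO_SUPPORTED_FLAGS)
            return -EINVAL;

        /* ... apply the supported flag changes ... */
        return 0;
    }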
Reported-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make the bnx2 driver use the new VLAN acceleration model.
Signed-off-by: Jesse Gross <jesse@nicira.com>
CC: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Many (but not all) drivers check to see whether there is a vlan
group configured before using a tag stored in the skb. There's
not much point in this check since it just throws away data that
should only be present in the expected circumstances. However,
it will soon be legal and expected to get a vlan tag when no
vlan group is configured, so remove this check from all drivers
to avoid dropping the tags.
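In driver terms the change is typically just dropping the vlgrp test, e.g.
in a transmit path. A sketch, using the vlan_tx_tag_present()/
vlan_tx_tag_get() helpers of that era; the flag name is illustrative:

    /* before: the tag is used only when a vlan group is configured */
    if (bp->vlgrp && vlan_tx_tag_present(skb)) {
        vlan_tag = vlan_tx_tag_get(skb);
        flags |= DEMO_TX_BD_FLAGS_VLAN_TAG;
    }

    /* after: use the tag whenever it is present */
    if (vlan_tx_tag_present(skb)) {
        vlan_tag = vlan_tx_tag_get(skb);
        flags |= DEMO_TX_BD_FLAGS_VLAN_TAG;
    }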
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This prevents unnecessary error messages. pci_save_state() is also moved
to the end of ->probe() so that all PCI config state, including the AER
state, will be saved.
Update version to 2.0.18.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Reviewed-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Improved flow control and simplified interface
- Use hardware RSS indirection table instead of the slower firmware-
based table
- Lower latency interrupt on 5709
Signed-off-by: Michael Chan <mchan@broadcom.com>
Reviewed-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change "return (EXPR);" to "return EXPR;"
return is not a function, parentheses are not required.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fresh skbs have ip_summed set to CHECKSUM_NONE (0), so we can avoid
setting skb->ip_summed to CHECKSUM_NONE again in drivers.
Introduce the skb_checksum_none_assert() helper so that this assertion
stays documented in driver sources.
Change most occurrences of:
skb->ip_summed = CHECKSUM_NONE;
to:
skb_checksum_none_assert(skb);
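The helper itself is essentially a documented assertion, roughly:

    static inline void skb_checksum_none_assert(const struct sk_buff *skb)
    {
    #ifdef DEBUG
        BUG_ON(skb->ip_summed != CHECKSUM_NONE);
    #endif
    }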
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: John Feeney <jfeeney@redhat.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
smp_mb() inside bnx2_tx_avail() is used twice in the normal
bnx2_start_xmit() path (see illustration below). The full memory
barrier is only necessary during race conditions with tx completion.
We can speed up the tx path by replacing the smp_mb() in bnx2_tx_avail()
with a compiler barrier, which only forces the compiler to re-fetch
tx_prod and tx_cons from memory.
In the race condition between bnx2_start_xmit() and bnx2_tx_int(),
we have the following situation:
bnx2_start_xmit()                          bnx2_tx_int()
    if (!bnx2_tx_avail())
            BUG();
    ...
    if (!bnx2_tx_avail())
            netif_tx_stop_queue();         update_tx_index();
            smp_mb();                      smp_mb();
            if (bnx2_tx_avail())           if (netif_tx_queue_stopped() &&
                    netif_tx_wake_queue();     bnx2_tx_avail())
With smp_mb() removed from bnx2_tx_avail(), we need to add smp_mb() to
bnx2_start_xmit() as shown above to properly order netif_tx_stop_queue()
and bnx2_tx_avail() to check the ring index. If it is not strictly
ordered, the tx queue can be stopped forever.
This improves performance by about 5% with 2 ports running bi-directional
64-byte packets.
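A condensed sketch of the resulting code (simplified; the demo_* names and
the ring arithmetic are illustrative, not the exact driver code):

    /* bnx2_tx_avail(): a compiler barrier is enough here, it only has
     * to force tx_prod and tx_cons to be re-read from memory
     */
    static inline int demo_tx_avail(struct demo_tx_ring *txr)
    {
        barrier();
        return txr->tx_ring_size - (txr->tx_prod - txr->tx_cons);
    }

    /* bnx2_start_xmit(): the full barrier moves to the stop-queue
     * slow path, ordering netif_tx_stop_queue() against the re-read
     * of the ring indices (paired with the smp_mb() in bnx2_tx_int())
     */
    if (demo_tx_avail(txr) <= MAX_SKB_FRAGS) {
        netif_tx_stop_queue(txq);
        smp_mb();
        if (demo_tx_avail(txr) > MAX_SKB_FRAGS)
            netif_tx_wake_queue(txq);
    }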
Reviewed-by: Benjamin Li <benli@broadcom.com>
Reviewed-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Based on original patch by Breno Leitão <leitao@linux.vnet.ibm.com>.
Allocate the actual number of vectors and make use of fewer vectors
if pci_enable_msix() returns > 0. We must allocate one additional
vector for the cnic driver.
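With the pci_enable_msix() API of that era (a positive return value means
only that many vectors are available), the allocation roughly follows this
pattern; the surrounding variable names are illustrative:

    struct msix_entry msix_ent[DEMO_MAX_MSIX_VEC];
    int rc, nvecs = DEMO_MAX_MSIX_VEC;   /* includes the extra cnic vector */
    int i;

    for (i = 0; i < nvecs; i++)
        msix_ent[i].entry = i;

    rc = pci_enable_msix(bp->pdev, msix_ent, nvecs);
    if (rc > 0) {
        /* fewer vectors available: retry with that count */
        nvecs = rc;
        rc = pci_enable_msix(bp->pdev, msix_ent, nvecs);
    }
    if (rc == 0)
        bp->irq_nvecs = nvecs;   /* use however many we got */
    /* else fall back to MSI/INTx */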
Cc: Breno Leitão <leitao@linux.vnet.ibm.com>
Reviewed-by: Benjamin Li <benli@broadcom.com>
Reviewed-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We were using the wrong tx multicast counter instead of the rx multicast
counter.
Reported-by: Peter Snellman <peter.snellman@cinnober.com>
Reviewed-by: Benjamin Li <benli@broadcom.com>
Reviewed-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use the DMA API as the PCI equivalents will be deprecated. This change
also allows allocating with GFP_KERNEL in some places.
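The typical substitution looks like this generic example (field names
illustrative):

    /* before: PCI DMA API, implicitly atomic allocation */
    blk = pci_alloc_consistent(bp->pdev, size, &mapping);

    /* after: generic DMA API, may sleep in process context */
    blk = dma_alloc_coherent(&bp->pdev->dev, size, &mapping, GFP_KERNEL);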
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that the core network stack can handle 64-bit netdevice stats on
32-bit arches, we can provide them for bnx2, since the hardware maintains
some 64-bit counters.
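The driver hooks ndo_get_stats64 for this; a sketch of the shape, with the
demo_* names as placeholders (in kernels of that era the callback returned
the stats pointer):

    static struct rtnl_link_stats64 *
    demo_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
    {
        struct demo_priv *bp = netdev_priv(dev);

        /* the hardware keeps some counters as 64-bit values; report
         * them without truncating to unsigned long on 32-bit arches
         */
        stats->rx_bytes   = demo_hw_stat64(bp, DEMO_STAT_RX_OCTETS);
        stats->tx_bytes   = demo_hw_stat64(bp, DEMO_STAT_TX_OCTETS);
        stats->rx_packets = demo_hw_stat64(bp, DEMO_STAT_RX_PKTS);
        stats->tx_packets = demo_hw_stat64(bp, DEMO_STAT_TX_PKTS);
        return stats;
    }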
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
These config register values will be useful when the memory registers
return 0xffffffff, which has been reported.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add skb->rxhash support for TCP packets only because the bnx2 RSS hash
does not hash UDP ports.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Minor change to use MSI-X even if there is only one CPU. This allows
the CNIC driver to always have a dedicated MSI-X vector to handle
iSCSI events, instead of sharing the MSI vector.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This removes the dma_get_ops() prefetch optimization in bnx2.
bnx2 uses dma_get_ops() to see if dma_sync_single_for_cpu() is a no-op,
and does the prefetch only if it is.
But dma_get_ops() isn't available on all architectures (only the
architectures that use the dma_map_ops struct have it). Using
dma_get_ops() in drivers leads to compilation breakage on many
architectures.
This patch removes dma_get_ops() and changes bnx2 to do the prefetch on
all architectures. This adds a useless prefetch on non-coherent
architectures, but it is harmless and unlikely to cause a performance
drop.
[ Remove now unused local variable 'pdev' -DaveM ]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
drivers/net/bnx2.c: In function 'bnx2_disable_forced_2g5':
drivers/net/bnx2.c:1489: warning: 'bmcr' may be used uninitialized in this function
We fix it by checking the return value of every bnx2_read_phy() call and
proceeding with the read-modify-write only if the read operation is
successful. The related bnx2_enable_forced_2g5() is fixed the same way.
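The resulting pattern, sketched with a placeholder bit name (bnx2_read_phy()
returns 0 on success):

    u32 bmcr;

    /* only modify and write back if the read actually succeeded;
     * otherwise bmcr would be used uninitialized
     */
    if (!bnx2_read_phy(bp, bp->mii_bmcr, &bmcr)) {
        bmcr &= ~DEMO_FORCE_2G5_BIT;   /* placeholder for the real bit */
        bnx2_write_phy(bp, bp->mii_bmcr, bmcr);
    }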
Reported-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The regression is caused by commit 4327ba435a ("bnx2: Fix netpoll crash").
If ->open() and ->close() are called multiple times, the same napi structs
will be added to dev->napi_list multiple times, corrupting the dev->napi_list.
This causes free_netdev() to hang during rmmod.
We fix this by calling netif_napi_del() during ->close().
Also, bnx2_init_napi() must not be in the __devinit section since it is
called by ->open().
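Schematically, ->close() gains a step like this (a sketch; the vector count
and array names follow the driver's layout but are not a verbatim copy):

    static void demo_del_napi(struct demo_priv *bp)
    {
        int i;

        /* undo netif_napi_add() so repeated open/close cycles do not
         * insert the same napi structs into dev->napi_list twice
         */
        for (i = 0; i < bp->irq_nvecs; i++)
            netif_napi_del(&bp->bnx2_napi[i].napi);
    }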
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Based on original patch from Stanislaw Gruszka <sgruszka@redhat.com>.
Using netif_carrier_off() is better than updating all the ->trans_start
on all the tx queues.
netif_carrier_off() needs to be called after bnx2_disable_int_sync()
to guarantee no race conditions with the serdes timers that can
modify the carrier state.
If the chip or phy is reset, carrier will turn back on when we get the
link interrupt. If there is no reset, we need to turn carrier back on
in bnx2_netif_start(). Again, the phy_lock prevents race conditions with
the serdes timers.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
New firmware fixes a performance regression on small packets.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Dump the correct MCP registers and add EMAC_RX_STATUS register during
NETDEV_WATCHDOG for debugging.
Signed-off-by: Eddie Wai <waie@broadcom.com>
Signed-off-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add prefetches of the skb and the next rx descriptor to speed up the rx path.
Use prefetchw() for the skb [suggested by Eric Dumazet].
The rx descriptor is in skb->data which is mapped for streaming mode DMA.
Eric Dumazet pointed out that we should not prefetch the data before
dma_sync, so we prefetch only if dma_sync is a no-op on the system.
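In outline (a sketch; next_rx_desc and the no-op test are placeholders, and
the no-op test was originally implemented via dma_get_ops(), which a later
patch in this series removes):

    #include <linux/prefetch.h>

    prefetchw(skb);              /* skb fields are about to be written */
    prefetch(&next_rx_desc);     /* warm the next descriptor */

    /* only touch packet data early when dma_sync_single_for_cpu() is
     * a no-op on this platform; otherwise the CPU does not yet own it
     */
    if (demo_dma_sync_is_noop())
        prefetch(skb->data);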
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
And turn on NETIF_F_GRO by default [requested by DaveM].
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>