Commit Graph

10381 Commits

Author SHA1 Message Date
Srikanth Thokala
46aa27df88 net: axienet: Use devm_* calls
Use devm_* calls so that allocated resources are released automatically when the driver is detached.

Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:34:00 -04:00
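
A minimal sketch of the devm_* pattern this commit moves to, with hypothetical names (foo_probe, struct foo_priv) rather than the actual axienet code: managed allocations are released automatically when the device is unbound, so error paths need no manual kfree()/iounmap().

    #include <linux/platform_device.h>
    #include <linux/io.h>
    #include <linux/err.h>

    struct foo_priv {
        void __iomem *regs;
    };

    static int foo_probe(struct platform_device *pdev)
    {
        struct foo_priv *priv;
        struct resource *res;

        priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
            return -ENOMEM;

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        priv->regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(priv->regs))
            return PTR_ERR(priv->regs);  /* no unwinding needed */

        platform_set_drvdata(pdev, priv);
        return 0;
    }
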
Srikanth Thokala
95219aa538 net: axienet: Use pdev instead of op
Synchronize names with other drivers

Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:33:59 -04:00
Michal Simek
850a7503b0 net: axienet: Fix comments blocks
There is a rule for comment blocks in network drivers which is now
checked by the checkpatch.pl script. Let's fix it.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:33:59 -04:00
Srikanth Thokala
c81a97b5ca net: axienet: Removed coding style errors and warnings
Removed checkpatch.pl errors and warnings.

Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:33:58 -04:00
Srikanth Thokala
d7cc3163e0 net: axienet: Support phy-less mode of operation
This patch adds proper checks to handle the PHY-less case.

Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:33:58 -04:00
Srikanth Thokala
f080a8c35d net: axienet: Handle jumbo frames for lesser frame sizes
In the current implementation, jumbo frames are supported only
for frame sizes > 16K. This patch corrects the logic to handle
jumbo frames for smaller frame sizes (< 16K), ensuring the jumbo
frame MTU stays within the limit of the maximum frame size
configured in the h/w design.

Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:33:58 -04:00
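
A hedged sketch of the kind of check described above, with illustrative names (foo_change_mtu, lp->rxmem standing in for the maximum frame size configured in the h/w design): a jumbo MTU is accepted only if the resulting frame still fits the configured limit.

    #include <linux/netdevice.h>
    #include <linux/if_vlan.h>
    #include <linux/if_ether.h>

    struct foo_local {
        unsigned int rxmem;   /* frame-size limit from the h/w design */
    };

    static int foo_change_mtu(struct net_device *ndev, int new_mtu)
    {
        struct foo_local *lp = netdev_priv(ndev);

        /* reject MTUs whose frames exceed what the design can buffer */
        if (new_mtu + VLAN_ETH_HLEN + ETH_FCS_LEN > lp->rxmem)
            return -EINVAL;

        ndev->mtu = new_mtu;
        return 0;
    }
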
Peter Crosthwaite
80c775accd net: axienet: Service completion interrupts ASAP
The packet completion interrupts for TX and RX should be serviced before
the packets are consumed. This guards against the degenerate case where a
new completion interrupt is raised after the handler has exited but before
the interrupts are cleared. In that case it is possible for the ISR to clear
an unhandled interrupt (leading to potential deadlock).

Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Tested-by: Jason Wu <huanyu@xilinx.com>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:33:58 -04:00
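
A sketch of the ordering the commit argues for, with hypothetical register and helper names (FOO_IRQ_STATUS, struct foo_priv, foo_consume_rx_packets are assumed, not the axienet ISR): acknowledge the completion interrupt first, then consume packets, so a completion raised while packets are being consumed stays pending instead of being cleared unhandled.

    #include <linux/interrupt.h>
    #include <linux/io.h>

    #define FOO_IRQ_STATUS  0x10   /* hypothetical status/ack register */

    static irqreturn_t foo_rx_irq(int irq, void *dev_id)
    {
        struct foo_priv *priv = dev_id;
        u32 status = ioread32(priv->regs + FOO_IRQ_STATUS);

        iowrite32(status, priv->regs + FOO_IRQ_STATUS);  /* ack first ... */
        foo_consume_rx_packets(priv);                    /* ... then consume */

        return IRQ_HANDLED;
    }
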
Peter Crosthwaite
38e96b35cd net: axienet: Handle 0 packet receive gracefully
The AXI-DMA rx-delay interrupt can sometimes be triggered
when there are 0 outstanding packets received. This is due
to the fact that the receive function will greedily consume
as many packets as possible on interrupt. So if two packets
(with a very particular timing) arrive in succession they
will each cause the rx-delay interrupt, but the first interrupt
will consume both packets.
This means the second interrupt is a 0 packet receive.

This is mostly OK, except that the tail pointer register is
updated unconditionally on receive. Currently the tail pointer
is always set to the current bd-ring descriptor, under
the assumption that the hardware has moved on to the next
descriptor. For a zero-length receive this means the current
descriptor, which the hardware may not have used yet, is
marked as the tail. This makes the hardware think it has
run out of descriptors, deadlocking the whole rx path.

Fix this by updating the tail pointer to the most recently
successfully consumed descriptor.

Reported-by: Wendy Liang <wendy.liang@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Tested-by: Jason Wu <huanyu@xilinx.com>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:33:58 -04:00
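
A fragment sketching the fix's idea with illustrative names (num_consumed, last_consumed, FOO_RX_TAILDESC and the 32-bit DMA address are all assumptions): advance the tail pointer only when at least one descriptor was actually consumed, and point it at the last descriptor handed back rather than at one the hardware may not have used yet.

    /* after the receive loop; num_consumed and last_consumed (index of the
     * last descriptor we processed) come from that loop */
    if (num_consumed) {
        u32 tail_p = lp->rx_bd_p + sizeof(struct foo_bd) * last_consumed;

        iowrite32(tail_p, lp->regs + FOO_RX_TAILDESC);  /* hypothetical reg */
    }
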
Srikanth Thokala
d1d372e8b7 net: axienet: Support for RGMII
This patch adds support for RGMII. The h/w configuration
parameter C_PHY_TYPE, which represents the interface configured in
the design, is used to differentiate the various interfaces supported
by AXI Ethernet.

Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:33:58 -04:00
Hariprasad Shenai
637d3e9973 cxgb4: Discard the packet if the length is greater than mtu
pktgen sends raw UDP packets and bypasses most of the
Linux networking stack. The user can specify different packet sizes,
so we need to discard the packet if its length is greater than the MTU.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:31:50 -04:00
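
A hedged sketch of such a check, not the literal cxgb4 code (foo_xmit and the stats accounting are illustrative): in the transmit path, frames longer than the interface MTU plus link-layer headers are dropped, since pktgen can hand the driver frames that never passed the usual stack checks.

    #include <linux/netdevice.h>
    #include <linux/if_ether.h>
    #include <linux/if_vlan.h>
    #include <linux/skbuff.h>

    static netdev_tx_t foo_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        unsigned int max_pkt_len = dev->mtu + ETH_HLEN;

        if (skb_vlan_tag_present(skb))
            max_pkt_len += VLAN_HLEN;

        if (unlikely(skb->len > max_pkt_len)) {
            dev_kfree_skb_any(skb);
            dev->stats.tx_dropped++;
            return NETDEV_TX_OK;   /* consumed, but not transmitted */
        }

        /* ... normal transmit path ... */
        return NETDEV_TX_OK;
    }
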
Hariprasad Shenai
a3bfb6179c cxgb4: Move SGE Ingress DMA state monitor code to a new routine
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:31:50 -04:00
Hariprasad Shenai
982b81eb24 cxgb4: Add device node to ULD info
Add a device node to the ULD info and use the node info in
alloc_ring() for the control TX queues.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:31:50 -04:00
Hariprasad Shenai
b8b1ae990e cxgb4: Pass in a Congestion Channel Map to t4_sge_alloc_rxq()
Passes a Congestion Channel Map to t4_sge_alloc_rxq()
for the Ethernet RX Queues based on the MPS Buffer Group Map
of the TX Channel rather than just the TX Channel Map.
Also, in t4_sge_alloc_rxq() for T5, setting up the
Congestion Manager values of the new RX Ethernet Queue is
now done by firmware.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:31:49 -04:00
Hariprasad Shenai
145ef8a54e cxgb4: Enable congestion notification from SGE for IQs and FLs.
Also change the name of t4_hw.c:get_mps_bg_map() to t4_get_mps_bg_map()
and make it an exported routine with a definition in cxgb4.h.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:31:49 -04:00
Hariprasad Shenai
1343299727 cxgb4: Make sure that Freelist size is larger than Egress Congestion Threshold
We need to make sure that the Free List Size, in pointers, is at
least 2 Egress Queue Units (8 pointers/each) larger than the SGE's Egress
Congestion Threshold (in pointers).

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-05 19:31:48 -04:00
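
A small fragment sketching the sizing rule stated above, with illustrative variable names (cong_thres, flsz): the free list, measured in pointers, is bumped so it exceeds the SGE egress congestion threshold by at least two egress-queue units of 8 pointers each.

    /* cong_thres: SGE egress congestion threshold, in pointers, as read
     * from the relevant SGE register (illustrative) */
    unsigned int min_fl = cong_thres + 2 * 8;   /* 2 EQ units of 8 pointers */

    if (flsz < min_fl)
        flsz = min_fl;
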
Mark Rustad
a1e869de72 ixgbe: Use a signed type to hold error codes
Because error codes are negative, it only makes sense to
consistently use signed types when handling them. Also remove
some explicit comparisons with 0 on these variables.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-05-04 02:31:13 -07:00
Mark Rustad
cb2effe540 ixgbe: Release semaphore bits in the right order
The global semaphore bits should be released in the reverse of the
order that they were taken, so correct that.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-05-04 02:12:10 -07:00
Mark Rustad
ae14a1d8e1 ixgbe: Fix IOSF SB access issues
IOSF is the Intel On-chip System Fabric used in SOCs. IOSF SB is
the IOSF SideBand message interface. This patch serializes IOSF SB
access using both phy bits in the SWFW_SEMAPHORE register. It also
adds a helper function to wait for IOSF SB accesses to complete.
Use the new function to perform this wait before each access, as
specified in the datasheet, in addition to using it to wait for
IOSF SB read/write completion.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-05-04 01:40:50 -07:00
Jeff Kirsher
30544af548 e1000e: fix call to do_div() to use u64 arg
We were using s64 for lat_ns (the latency value in nanoseconds) since
our calculations could produce a negative value. For negative
values we then set lat_ns to zero, so the value passed to
do_div() was never negative, but do_div() expects the argument type
to be u64, so add a cast to resolve a compile warning seen on
PowerPC.

CC: Yanjiang Jin <yanjiang.jin@windriver.com>
CC: Yanir Lubetkin <yanirx.lubetkin@intel.com>
Reported-by: Yanjiang Jin <yanjiang.jin@windriver.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
2015-05-04 01:38:08 -07:00
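
A sketch of the pattern described above (foo_lat_ns_to_us is an illustrative helper, not the e1000e code): keep the intermediate latency value signed because the math can go negative, clamp it at zero, then pass do_div() a u64, which is the type it expects.

    #include <asm/div64.h>
    #include <linux/time.h>

    static u32 foo_lat_ns_to_us(s64 lat_ns)
    {
        u64 tmp;

        if (lat_ns < 0)
            lat_ns = 0;
        tmp = (u64)lat_ns;          /* do_div() wants a u64 lvalue */
        do_div(tmp, NSEC_PER_USEC); /* tmp becomes the quotient */
        return (u32)tmp;
    }
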
Alexander Duyck
55e7fe5b9c e1000e: Do not allow CRC stripping to be disabled on 82579 w/ jumbo frames
The driver wasn't allowing jumbo frames to be enabled when CRC stripping
was disabled; however, it was allowing CRC stripping to be disabled while
jumbo frames were enabled. Fix this by making sure the NETIF_F_RXFCS flag
cannot be set when jumbo frames are enabled on 82579 and newer parts.

Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-05-04 01:26:44 -07:00
Alexander Duyck
8084b86dcf e1000e: Cleanup handling of VLAN_HLEN as a part of max frame size
When VLAN_HLEN was added to the calculation of the maximum frame size,
a number of issues seem to have been introduced into the driver.

The first issue is that in some cases the maximum frame size for a device
never really reached the actual maximum frame size, because the VLAN header
length was not included in the calculation of that value.  As a result some
parts only supported a maximum frame size of 1496, for parts that don't
support jumbo frames, or 8996, for parts that do.

The second issue is that several checks weren't updated, so setting an MTU
of 1500 was treated as enabling jumbo frames because the calculated value
was 1522 instead of 1518.  I have addressed this by replacing ETH_FRAME_LEN
with VLAN_ETH_FRAME_LEN where appropriate.

The final issue is that lowering the MTU below 1500 would cause the driver
to allocate 2K buffers for the rings.  This is an old issue that was fixed
several years ago in igb/ixgbe, and I am addressing it now by replacing ==
with <= so that we always round up to 1522 for anything that isn't a jumbo
frame.

Fixes: c751a3d58c ("e1000e: Correctly include VLAN_HLEN when changing interface MTU")
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-05-04 01:20:30 -07:00
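
A fragment sketching the two points in the commit, with illustrative variable names (max_frame, rx_buffer_len, jumbo) rather than the e1000e code: the standard frame cut-off includes the VLAN header (1522 bytes with FCS), and anything at or below it gets the standard buffer instead of being treated as jumbo.

    #include <linux/if_vlan.h>   /* VLAN_ETH_HLEN, VLAN_ETH_FRAME_LEN */
    #include <linux/if_ether.h>  /* ETH_FCS_LEN */

    unsigned int max_frame = new_mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;

    /* 1522 = VLAN_ETH_FRAME_LEN + ETH_FCS_LEN; "<=" (not "==") so that
     * MTUs below 1500 also land in the standard buffer, not a 2K one */
    if (max_frame <= VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)
        rx_buffer_len = VLAN_ETH_FRAME_LEN + ETH_FCS_LEN;
    else
        jumbo = true;   /* jumbo sizing handled elsewhere */
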
Jean Sacren
ac7c1c5af9 e100: don't initialize int object to zero
'err' will be overwritten, so there is no need to initialize it to zero.

Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-05-04 01:18:06 -07:00
Todd Fujinaka
8cfb879d1b igb: simplify and clean up igb_enable_mas()
igb_enable_mas() should only be called for the 82575 and has no meaningful
return value, so change it to void. Also simplify the odd conditional
expression.

Signed-off-by: Todd Fujinaka <todd.fujinaka@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-05-04 01:17:47 -07:00
françois romieu
3a5a883a8a via-rhine: close SMP transmit races.
7ab87ff4c7 ("via-rhine: move work from
irq handler to softirq and beyond") forgot to explicitely control the
lifespan of the tx_dirty and tx_cur pointers.

Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-04 00:18:27 -04:00
françois romieu
e1efa87241 via-rhine: dma_wmb transmit barrier.
Follow the now usual transmit descriptor update path:
1. content change
2. dma_wmb
3. ownership change

Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-04 00:18:27 -04:00
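
A generic fragment of the three-step descriptor update named above (the descriptor field names and DESC_OWN bit are illustrative, not the via-rhine layout): write the contents, issue dma_wmb() so the device cannot observe the ownership bit before the contents, then hand the descriptor over.

    desc->addr   = cpu_to_le32(mapping);       /* 1. content change   */
    desc->length = cpu_to_le32(skb->len);
    dma_wmb();                                 /* 2. dma_wmb          */
    desc->status = cpu_to_le32(DESC_OWN);      /* 3. ownership change */
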
françois romieu
810f19bcb8 via-rhine: add consistent memory barrier in vlan receive code.
The NAPI receive path depends on desc->rx_status but it does not
enforce any explicit receive barrier.

Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-04 00:18:26 -04:00
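
The matching receive-side fragment, again with generic names (rx_status, DESC_OWN, vlan_tci are illustrative): once rx_status shows the descriptor belongs to the CPU, dma_rmb() keeps the rest of the descriptor, including the VLAN tag, from being read before that status check.

    u32 status = le32_to_cpu(desc->rx_status);

    if (status & DESC_OWN)
        return;        /* still owned by the hardware */

    dma_rmb();         /* order the status read before the remaining reads */
    vlan_tci = le16_to_cpu(desc->vlan_tci);
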
françois romieu
62ca1ba020 via-rhine: kiss rx_head_desc goodbye.
The driver no longer produces holes in its receive ring so rx_head_desc
only duplicates cur_rx.

Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-04 00:18:26 -04:00
françois romieu
8709bb2c1e via-rhine: forbid holes in the receive descriptor ring.
Rationales:
- throttle work under memory pressure
- lower receive descriptor recycling latency for the network adapter
- lower the maintenance burden of uncommon paths

The patch is twofold:
- it fails early if the receive ring can't be completely initialized
  at dev->open() time
- it drops packets on the floor in the napi receive handler so as to
  keep the receive ring full

Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-04 00:18:26 -04:00
françois romieu
4d1fd9c1d8 via-rhine: gotoize rhine_open error path.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-04 00:18:26 -04:00
françois romieu
a21bb8bae1 via-rhine: allocate and map receive buffer in a single transaction
It is used to initialize the receive ring, but it will really shine when
the receive poll code is reworked.

Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-04 00:18:26 -04:00
françois romieu
e45af49795 via-rhine: commit receive buffer address before descriptor status update.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-04 00:18:26 -04:00
David S. Miller
3715544750 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Merge net into net-next.

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-02 22:05:58 -04:00
Simon Horman
629161f649 net: rocker: Use ether_addr_equal
A small cleanup to make use of the ether_addr_equal helper.

Signed-off-by: Simon Horman <simon.horman@netronome.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-01 21:48:30 -04:00
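
A tiny fragment of the cleanup in an illustrative context (found and addr are assumed locals): ether_addr_equal() replaces an open-coded memcmp() of two 6-byte MAC addresses.

    #include <linux/etherdevice.h>

    /* before: if (!memcmp(found->addr, addr, ETH_ALEN)) ... */
    if (ether_addr_equal(found->addr, addr))
        return found;
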
Iyappan Subramanian
9dd3c79749 drivers: net: xgene: fix kbuild warnings
Fixed the following kbuild warnings:
1. unused variable 'of_id'
2. buffer overflow 'ring_cfg' 5 <= 5

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 22:30:07 -04:00
Markus Pargmann
e813bb2b95 net: fec: Fix RGMII-ID mode
RGMII-ID uses an internal delay within the transmitter or receiver. This
feature is PHY-specific; the rest of the communication is normal RGMII.

So the fec driver has to check for all RGMII modes, not only
'PHY_INTERFACE_MODE_RGMII'.

Signed-off-by: Markus Pargmann <mpa@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:48:53 -04:00
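
A fragment sketching the idea, not the literal fec change (phy_interface and rgmii_mode are illustrative locals): every RGMII variant, including the internal-delay ones, is treated as RGMII when configuring the MAC.

    #include <linux/phy.h>

    switch (phy_interface) {
    case PHY_INTERFACE_MODE_RGMII:
    case PHY_INTERFACE_MODE_RGMII_ID:
    case PHY_INTERFACE_MODE_RGMII_RXID:
    case PHY_INTERFACE_MODE_RGMII_TXID:
        rgmii_mode = true;    /* RGMII-ID is still RGMII on the wire */
        break;
    default:
        rgmii_mode = false;
        break;
    }
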
Ido Shamay
07841f9d94 net/mlx4_en: Schedule napi when RX buffers allocation fails
When the system is out of memory, refilling of RX buffers fails while
the driver continues to pass the received packets to the kernel stack.
At some point, when all RX buffers are depleted, the driver may go to
sleep and not recover when memory for new RX buffers is once again
available. This is because the hardware does not have valid descriptors,
so no interrupt will be generated for the driver to return to work
in napi context. Fix it by scheduling the napi poll function from the
stats_task delayed workqueue, as long as the allocations fail.

Signed-off-by: Ido Shamay <idos@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:47:50 -04:00
Tony Camuso
c232d8a8bb netxen_nic: use spin_[un]lock_bh around tx_clean_lock
Although testing this driver with DEBUG_LOCKDEP and DEBUG_SPINLOCK
enabled did not produce any traces, it would be more prudent in the
case of tx_clean_lock to use spin_[un]lock_bh, since this lock is
manipulated in both process and softirq context.

This patch was tested for functionality and regressions with netperf
and with DEBUG_LOCKDEP and DEBUG_SPINLOCK enabled.

Signed-off-by: Tony Camuso <tcamuso@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:37:29 -04:00
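
A fragment of the locking pattern this change applies (the surrounding adapter context is illustrative): because tx_clean_lock is taken both in process context and in softirq context, the process-context side must disable bottom halves while holding it.

    spin_lock_bh(&adapter->tx_clean_lock);
    /* ... reclaim completed tx descriptors ... */
    spin_unlock_bh(&adapter->tx_clean_lock);
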
Ivan Vecera
18824894db be2net: log link status
Unlike other drivers, this driver does not log link state changes. It
is better for the user when asynchronous link state changes are logged
to the system log.

v3: Changes from v2 discarded as "not necessary"

Cc: Sathya Perla <sathya.perla@emulex.com>
Cc: Subbu Seetharaman <subbu.seetharaman@emulex.com>
Cc: Ajit Khaparde <ajit.khaparde@emulex.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:36:22 -04:00
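
A fragment showing the kind of message being added, not the exact be2net wording (link_up and netdev are assumed from the surrounding handler): asynchronous link events are reported through the standard netdev logging helpers so they reach the system log.

    if (link_up)
        netdev_info(netdev, "Link is Up\n");
    else
        netdev_info(netdev, "Link is Down\n");
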
Thomas Falcon
9c7e8bc584 ibmveth: Add support for Large Receive Offload
Enables receiving large packets from other LPARs. These packets
have a -1 IP header checksum, so we must recalculate it to have
a valid checksum.

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:33:46 -04:00
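
A generic fragment for recomputing an IPv4 header checksum, not the literal ibmveth code (it assumes skb's network header is already set): the field is zeroed and recalculated over the header, replacing the invalid (-1) value the large-receive frames arrive with.

    #include <linux/ip.h>
    #include <net/checksum.h>

    struct iphdr *iph = ip_hdr(skb);   /* network header assumed valid */

    iph->check = 0;
    iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
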
Thomas Falcon
92ec8279f5 ibmveth: Add GRO support
Cc: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:33:46 -04:00
Thomas Falcon
8641dd8579 ibmveth: Add support for TSO
Add support for TSO.  TSO is turned off by default and
must be enabled and configured by the user.  The driver
version number is increased so that users can be sure
that they are using ibmveth with TSO support.

Cc: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:33:45 -04:00
Thomas Falcon
cd7c7ec368 ibmveth: change rx buffer default allocation for CMO
This patch enables 64k rx buffer pools by default.  If Cooperative
Memory Overcommitment (CMO) is enabled, the number of 64k buffers
is reduced to save memory.

Cc: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:33:45 -04:00
David Ahern
17d5ceb6e4 net/mlx4_core: Fix unaligned accesses
Addresses the following kernel logs seen during boot:

Kernel unaligned access at TPC[100ee150] mlx4_QUERY_HCA+0x80/0x248 [mlx4_core]
Kernel unaligned access at TPC[100f071c] mlx4_QUERY_ADAPTER+0x100/0x12c [mlx4_core]
Kernel unaligned access at TPC[100f071c] mlx4_QUERY_ADAPTER+0x100/0x12c [mlx4_core]
Kernel unaligned access at TPC[100f071c] mlx4_QUERY_ADAPTER+0x100/0x12c [mlx4_core]
Kernel unaligned access at TPC[100f071c] mlx4_QUERY_ADAPTER+0x100/0x12c [mlx4_core]

Signed-off-by: David Ahern <david.ahern@oracle.com>
Acked-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:26:30 -04:00
Benjamin Poirier
f94813f3c1 mlx4_en: Use correct loop cursor in error path.
Signed-off-by: Benjamin Poirier <bpoirier@suse.de>
Fixes: 9e311e7 ("net/mlx4_en: Use affinity hint")
Acked-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:25:14 -04:00
Iyappan Subramanian
561fea6dea drivers: net: xgene: Add SGMII based 1GbE support with ring manager v2
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:03:14 -04:00
Iyappan Subramanian
bc1b7c132a drivers: net: xgene: Add 10GbE support with ring manager v2
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:03:13 -04:00
Iyappan Subramanian
ed9b7da019 drivers: net: xgene: Add ring manager v2 functions
Adding ring manager v2 support for APM X-Gene ethernet driver.

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:03:13 -04:00
Iyappan Subramanian
81cefb81db drivers: net: xgene: Change ring manager to use function pointers
This is a preparatory patch for enabling the APM X-Gene ethernet
driver to work with ring manager v2.

Added xgene_ring_ops structure for storing chip specific ring manager
properties and functions.

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-30 16:03:13 -04:00
Hariprasad Shenai
7f0b8a56c9 cxgb4: Fix MC1 memory offset calculation
Commit 6559a7e829 ("cxgb4: Cleanup macros so they follow the same
style and look consistent") introduced a regression where reading MC1
memory in adapters where MC0 isn't present or MC0 size is not equal to MC1
size caused the adapter to crash due to incorrect computation of memoffset.
The fix is to read the size of MC0 instead of MC1 for the offset calculation.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-29 15:50:16 -04:00
Yuval Mintz
12a8541d5c bnx2x: Delay during kdump load
In a kdump environment interfaces might be re-loaded without a proper
unload sequence in the previously running kernel.
The bnx2x management FW and driver maintain a `pulse' that notifies the
FW that the driver is still up and running.

Driver load on the kdump kernel should be performed only after the pulse
has been out-of-sync long enough for the management FW to identify that
the driver has crashed, at which point it will perform the necessary
cleanup of the HW.

In today's distros kdump loading is quite fast, sometimes too fast for the
FW to see the pulse go out-of-sync. This patch delays bnx2x's probe during
kdump to allow a proper re-load in the kdump kernel.

Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-29 15:49:21 -04:00
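
A hedged fragment of the approach (the 5-second value is illustrative, not the bnx2x number): when probing inside a kdump kernel, wait long enough for the management FW to notice the missing pulse before loading again.

    #include <linux/crash_dump.h>
    #include <linux/delay.h>

    if (is_kdump_kernel())
        msleep(5000);   /* illustrative; must exceed the FW pulse timeout */
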