e1000 allocates half a page per skb fragment. We must account for
PAGE_SIZE/2 increments in skb->truesize, not the actual frag length.
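For illustration, the accounting this implies looks roughly like the
following, assuming a half-page receive buffer added as a page frag
(variable names are illustrative):

	/* charge the full half-page buffer to truesize, not the bytes received */
	skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, page, 0, length);
	skb->len += length;
	skb->data_len += length;
	skb->truesize += PAGE_SIZE / 2;	/* buffer size, not 'length' */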
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e1000 allocates a full page per skb fragment. We must account for PAGE_SIZE
increments in skb->truesize, not the actual frag length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
bnx2 allocates a full page per fragment. We must account for PAGE_SIZE
increments in skb->truesize, not the actual frag length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix skb truesize underestimation in this driver.
Each frag truesize is exactly rx_frag_size bytes (2048 bytes by default).
A driver should not use "sizeof(struct sk_buff)" at all.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This change updates the driver version to 3.2.10.
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds VMDq loopback PF support for i350 devices. The patch
is necessary since the register that enables loopback was moved and
renamed from DTXSWC to TXSWC.
Signed-off-by: "Akeem G. Abodunrin" <akeem.g.abodunrin@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
igb_update/validate_nvm_checksum_with_offset() should be static.
Also removes unneeded prototypes for the above functions.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
On i350 when traffic is looped back from a VF to the PF the value is byte
swapped from the normal format. In order to address this we need to add a
flag indicating that the ring will need to byte swap the loopback packets
prior to processing them.
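A rough sketch of how such a flag might be consumed in the Rx cleanup path,
assuming the swapped field is the VLAN tag in the Rx descriptor (the flag
name and exact handling are illustrative):

	u16 vid = le16_to_cpu(rx_desc->wb.upper.vlan);

	/* i350 loopback traffic arrives byte swapped; undo it */
	if (test_bit(IGB_RING_FLAG_RX_LB_VLAN_BSWAP, &rx_ring->flags))
		vid = swab16(vid);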
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Since we mask interrupts in EIMS, not in IMS, there is no need to re-enable
mask bits in IMS. As such we can remove the write to IMS from the end of
igb_msix_other.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change allows support for per-packet timesync and global device reset
on the i350 adapter. These features are supported on both 82580 and i350;
however, it looks like several checks were not updated, and as a result the
i350 support was not enabled.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change makes certain that one interrupt is always initialized in
igb_request_irq. In addition we drop the use of adapter->pdev and instead
just use pdev, since we made a local copy of the pointer earlier in the
function.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change mostly drops unnecessary local pointer definitions for q_vector
in places where line width is not an issue and the pointer is not referenced
multiple times.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Correct a check for change in FCoE priority when IEEE mode DCB is in use.
In IEEE mode a different function has to be used to get the FCoE priority
mask. Also, the check for the mask assumed that only one priority was set.
In case more than one is set, check just the FCoE bit.
These changes help avoid link flapping issues that can come up when IEEE
DCB is in use.
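For illustration only, the bit test described above could look like the
following; the helper and field names here are hypothetical, not the
driver's actual API:

	u8 up_mask = ixgbe_ieee_fcoe_up_mask(adapter);	/* hypothetical: priorities carrying FCoE */
	u8 fcoe_up = adapter->fcoe.up;			/* priority currently used for FCoE */

	/* more than one priority may be set in IEEE mode, so test only
	 * the FCoE bit instead of comparing the whole mask
	 */
	if (!(up_mask & (1 << fcoe_up)))
		fcoe_up_changed = true;	/* reconfigure; otherwise leave the link alone */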
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add 2 new counters to ethtool:
1. Count DDP allocation failures that occur because we hit the maximum
number of buffers allowed in one DDP context.
2. Count DDP allocation failures that occur because we hit that maximum
while allocating an extra buffer.
Signed-off-by: Amir Hanania <amir.hanania@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
It is possible for a VF to set an invalid target DMA address in its
Tx/Rx descriptor buffer pointers. The workarounds in this patch
will guard against such an event and issue a VFLR to the VF in response.
The VFLR will shut down the VF until an administrator can take action
to investigate the event and correct the problem.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Breno Leitao <leitao@linux.vnet.ibm.com>
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Joe reported to me that right after bringing up an r6040 interface the
ethtool output was inconsistent with respect to link duplex and speed. Fix
this by adding a missing phy_start call in r6040_up and, conversely, a
phy_stop call in r6040_down to properly initialize the PHY state.
Reported-by: Joe Chou <Joe.Chou@rdc.com.tw>
Signed-off-by: Florian Fainelli <florian@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Query port will now identify a 40G Ethernet speed.
Signed-off-by: Alexander Guller <alexg@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Netdevice was being freed without being unregistered first if
mlx4_SET_PORT_general or mlx4_INIT_PORT failed.
Signed-off-by: Alexander Guller <alexg@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
The number of bits taken from the MAC table index in the QP calculation
should be based on the log_num_mac parameter.
Signed-off-by: Alexander Guller <alexg@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixed a memory leak caused by missing iounmap when device
is being released.
Signed-off-by: Alexander Guller <alexg@mellanox.co.il>
Signed-off-by: Sharon Cohen <sharonc@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Moderation is now done per ring and coalescing is enabled
by set_ring_param in ethtool.
Signed-off-by: Alexander Guller <alexg@mellanox.co.il>
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixed a bug where ring size change caused insufficient memory
upon driver restart due to unreleased EQs.
Signed-off-by: Alexander Guller <alexg@mellanox.co.il>
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Until now only the RX rings used an IRQ per ring, while TX used a single
IRQ per port. From now on both use an IRQ per ring, with RX and TX ring[i]
sharing the same IRQ.
Signed-off-by: Alexander Guller <alexg@mellanox.co.il>
Signed-off-by: Sharon Cohen <sharonc@mellanox.co.il>
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds support for Rx hashing.
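As a sketch of what this typically amounts to in the Rx path, the
hardware-computed RSS hash is read from the advanced Rx descriptor
write-back and handed to the stack (field names follow the e1000 advanced
descriptor layout; treat the exact line as illustrative):

	skb->rxhash = le32_to_cpu(rx_desc->wb.lower.hi_dword.rss);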
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change moves the Tx hang check into the ring flags.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch cleans up several issues with VLANs on igb after the recent
changes that were meant to leave VLANs enabled/disabled via the
netdev->features flags.
Specifically, the Rx VLAN settings were being dropped after reset because
they were not being restored correctly. In addition I removed the IRQ
disable/enable since those were in place to protect the setting of vlgrp.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Instead of doing a byte swap on the staterr bits in the Rx descriptor we
can save ourselves a bit of space and some CPU time by just testing the
various bits in the Rx descriptor directly.
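A sketch of the kind of helper this enables, testing status bits against
the little-endian descriptor word instead of byte swapping it first (an
assumed shape, not necessarily the exact driver code):

	static inline bool igb_test_staterr(union e1000_adv_rx_desc *rx_desc,
					    const u32 stat_err_bits)
	{
		/* swap the constant at compile time; the descriptor stays LE */
		return rx_desc->wb.upper.status_error & cpu_to_le32(stat_err_bits);
	}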
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Since the netdev now has its own checksum flag to indicate whether Rx
checksum is enabled, we might as well use that instead of using the ring
flag.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to cleanup some of the IVAR register configuration.
igb_assign_vector had become pretty large with multiple copies of the same
general code for setting the IVAR. This change consolidates most of that
code by adding the igb_write_ivar function, which allows us to just compute
the index and offset and then use that information to set up the IVAR.
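A sketch of what such a helper might look like; the register access macros
and constants are shown as in the igb driver, but treat the exact body as
illustrative:

	static void igb_write_ivar(struct e1000_hw *hw, int msix_vector,
				   int index, int offset)
	{
		u32 ivar = array_rd32(E1000_IVAR0, index);

		/* clear any bits that are currently set for this entry */
		ivar &= ~((u32)0xFF << offset);

		/* write vector number and valid bit into the entry */
		ivar |= (msix_vector | E1000_IVAR_VALID) << offset;

		array_wr32(E1000_IVAR0, index, ivar);
	}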
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change moves information related to interrupt throttle rate
configuration into a separate q_vector sub-structure called a work
container. A similar change has already been made for ixgbe, and this work
is based on that.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change moves all of the ring flags into a single value. The advantage
to this is that there is one central area for all of these flags and they
can all make use of the set/test bit operations.
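For illustration, once the flags live in a single unsigned long field the
usual atomic bit operations apply (flag names and the consumer function
here are only examples):

	/* example flag bits; the real driver flags would live in igb.h */
	#define IGB_RING_FLAG_RX_CSUM		0
	#define IGB_RING_FLAG_TX_DETECT_HANG	1

	static void example_ring_flag_usage(struct igb_ring *ring)
	{
		set_bit(IGB_RING_FLAG_TX_DETECT_HANG, &ring->flags);

		if (test_bit(IGB_RING_FLAG_RX_CSUM, &ring->flags))
			;	/* Rx checksum offload enabled on this ring */

		clear_bit(IGB_RING_FLAG_TX_DETECT_HANG, &ring->flags);
	}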
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
There are a number of places where we have values that are stored as u16
but are being converted to int unnecessarily. In order to avoid that we
should convert all variables that deal with the next_to_clean, next_to_use,
and count to u16 values.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to update the ring and vector allocations so that they
are per node instead of allocating everything on the node that
ifconfig/modprobe is called on. By doing this we can cut down
significantly on cross node traffic.
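A hedged sketch of the allocation pattern this implies, placing each ring
on its target node with a fallback to a plain allocation (names are
illustrative):

	struct igb_ring *ring;

	/* prefer the node that will service this ring */
	ring = kzalloc_node(sizeof(struct igb_ring), GFP_KERNEL, node);
	if (!ring)
		ring = kzalloc(sizeof(struct igb_ring), GFP_KERNEL);
	if (!ring)
		goto err;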
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Instead of storing most of the data for the TX hot path on the stack until
we are ready to write the descriptor, we can save ourselves some time and
effort by pushing the SKB, tx_flags, gso_size, bytecount, and protocol into
the first igb_tx_buffer, since that is where we will end up putting them
anyway.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Per comments from Ben Hutchings on a previous patch, sweep the floors
a little removing unnecessary assignments of zero to fields of struct
ethtool_ringparam in driver code supporting ethtool -g.
Signed-off-by: Rick Jones <rick.jones2@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for reporting ring sizes via ethtool -g to the 8139cp driver.
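A sketch of what the added handler looks like for a driver with fixed ring
sizes; unused ethtool_ringparam fields need no explicit zeroing because the
ethtool core clears the structure first (constants as in 8139cp, body
illustrative):

	static void cp_get_ringparam(struct net_device *dev,
				     struct ethtool_ringparam *ring)
	{
		ring->rx_max_pending = CP_RX_RING_SIZE;
		ring->tx_max_pending = CP_TX_RING_SIZE;
		ring->rx_pending = CP_RX_RING_SIZE;
		ring->tx_pending = CP_TX_RING_SIZE;
	}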
Signed-off-by: Rick Jones <rick.jones2@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This change will combine the writes of tx_buffer_info and the Tx data
descriptors into a single function. The advantage of this is that we can
avoid needless memory reads from the buffer info struct and speed things up
by keeping the accesses to the local registers.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to combine all of the TX flags fields into one u32
flags field so that it can be stored into the tx_buffer_info structure.
This includes the time stamp flag as well as mapped_as_page flag info.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to cleanup the protocol handling in the transmit path
so that it correctly offloads software VLAN tagged frames.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to improve the readability of the driver by separating
out the cmd_type configuration and the olinfo configuration into their own
functions. By doing this it is much easier to determine which ingredients
go into setting up these two portions of the descriptor.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change converts two tx_buffer_info index values into pointers. The
advantage of this is that we reduce unnecessary computations, and in the
case of next_to_watch we get an added bonus: a NULL value clearly indicates
the entry is unset, whereas an index value of 0 carries no such meaning.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch is meant to simplify the transmit path by reducing the overhead
of creating a transmit context descriptor. The current implementation is
split, with igb_tso and igb_tx_csum providing two separate implementations
of how to set up the tx_buffer_info structure and the tx_desc. By combining
them it is possible to reduce code and simplify things, since now only one
function creates context descriptors.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In order to be able to improve the performance of the TX path it has been
necessary to add additional info to the tx_buffer_info structure. However,
a side effect is that the structure has gotten larger, which in turn has
also increased the size of the RX buffer info structure. In order to avoid
this in the future I am splitting the single buffer_info structure into two
separate ones and joining them by making the buffer_info pointer in the
ring a union of the two.
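A rough sketch of the resulting layout (field names are illustrative, not
the exact driver structures):

	struct igb_tx_buffer {			/* Tx-only bookkeeping */
		union e1000_adv_tx_desc *next_to_watch;
		struct sk_buff *skb;
		unsigned int bytecount;
		u16 gso_segs;
		dma_addr_t dma;
		u32 length;
		u32 tx_flags;
	};

	struct igb_rx_buffer {			/* Rx-only bookkeeping */
		struct sk_buff *skb;
		dma_addr_t dma;
		struct page *page;
		dma_addr_t page_dma;
		u32 page_offset;
	};

	struct igb_ring {
		/* ... other ring fields ... */
		union {				/* interpreted per ring type */
			struct igb_tx_buffer *tx_buffer_info;
			struct igb_rx_buffer *rx_buffer_info;
		};
	};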
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>