When the IP stack passes SKBs, the sfc driver puts them in two
different TX queues (called partners): one for checksummed packets
and one for unchecksummed packets.
If an SKB has xmit_more set, the driver delays pushing the work to
the NIC.
When it later decides to push the buffers, this patch ensures it also
pushes the partner queue if that queue has any delayed work. Before
this fix the work in the partner queue could be left unpushed for a
long time and trigger a netdev watchdog.
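Roughly, the push path then looks like this (a sketch; helper and
field names follow the driver's paired-queue layout):

    if (!skb->xmit_more || netif_xmit_stopped(tx_queue->core_txq)) {
            struct efx_tx_queue *txq2 = efx_tx_queue_partner(tx_queue);

            /* The partner queue may hold work delayed by earlier
             * xmit_more SKBs; push it first so it is not left
             * behind to trip the netdev watchdog.
             */
            if (txq2->xmit_more_available)
                    efx_nic_push_buffers(txq2);
            efx_nic_push_buffers(tx_queue);
    } else {
            tx_queue->xmit_more_available = true;
    }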
Fixes: 70b33fb ("sfc: add support for skb->xmit_more")
Reported-by: Jianlin Shi <jishi@redhat.com>
Signed-off-by: Martin Habets <mhabets@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The BQL limit is updated each time we call
netdev_tx_completed_queue().
Without this patch the limit was updated for every TX event we saw,
so the limit only ever grew enough to handle the data we complete
across two events, as the first event would not show that enough
traffic had been processed in between.
This was OK with interrupt moderation off, but not with it on, as
more data must then be completed within a single interrupt.
Change this so that the completion is reported to BQL only when all
the TX events in the interrupt have been processed.
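Sketching the new scheme (the pkts_compl/bytes_compl accumulators
are illustrative):

    /* per TX event: accumulate, do not report yet */
    tx_queue->pkts_compl += pkts_compl;
    tx_queue->bytes_compl += bytes_compl;

    /* after all TX events in this interrupt have been processed */
    if (tx_queue->bytes_compl) {
            netdev_tx_completed_queue(tx_queue->core_txq,
                                      tx_queue->pkts_compl,
                                      tx_queue->bytes_compl);
            tx_queue->pkts_compl = 0;
            tx_queue->bytes_compl = 0;
    }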
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
write_count and insert_count are free-running counters that can wrap
around, making a > comparison between them invalid.
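For free-running unsigned counters the usual wrap-safe idiom is
subtraction, e.g. (illustrative; the actual call site may differ):

    /* broken once the counters wrap: */
    if (tx_queue->insert_count > tx_queue->write_count)
            efx_nic_push_buffers(tx_queue);

    /* wrap-safe: unsigned subtraction gives the distance modulo 2^N */
    if (tx_queue->insert_count - tx_queue->write_count)
            efx_nic_push_buffers(tx_queue);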
Fixes: 70b33fb0ddec ("sfc: add support for skb->xmit_more")
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When skb->xmit_more is set, don't ring the doorbell and don't do PIO.
This also prevents TX Push, because more than one buffer will be
waiting when the doorbell is eventually rung.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert the normal transmit completion path from dev_kfree_skb_any()
to dev_consume_skb_any() to help keep dropped packet profiling
meaningful.
Signed-off-by: Rick Jones <rick.jones2@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
__iowrite64_copy() isn't quite the same as efx_memcpy_64(), but
it looks close enough:
- The length is in units of qwords not bytes
- It never byte-swaps, but that doesn't make a difference now as PIO
is only enabled for x86_64
- It doesn't include any memory barriers, but that's OK as there is a
barrier just before pushing the doorbell
- mlx4_en uses it for the same purpose
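The call-site change is then essentially (illustrative):

    /* before: length in bytes */
    efx_memcpy_64(dest, src, len);
    /* after: length in qwords */
    __iowrite64_copy(dest, src, len >> 3);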
Compile-tested only.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Implement per-channel software TX and RX packet counters, accessed
as ethtool statistics.
This allows the counts to be cross-checked against the MAC
statistics.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes: ee45fd92c739 ("sfc: Use TX PIO for sufficiently small packets")
The Linux net driver uses memcpy_toio() to copy into the PIO
buffers.
Even on a 64-bit machine this results in 32-bit accesses to a
write-combined memory region.
There are hardware limitations that mean only naturally aligned
64-bit accesses are safe in all cases.
Because the region is write-combined, two 32-bit accesses may be
coalesced into a single 64-bit access that is not 64-bit aligned.
The solution is to open-code the memory copy routines using pointers
and to enable PIO only on x86_64 machines.
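A minimal sketch of such an open-coded copy, assuming qword-aligned
buffers and a length that is a multiple of 8 (the upstream version
differs in detail):

    static void efx_memcpy_64(void __iomem *dest, const void *src,
                              size_t len)
    {
            const u64 *s = src;
            u64 __iomem *d = dest;

            /* only whole, naturally aligned 64-bit writes may reach
             * the write-combined PIO aperture
             */
            for (; len >= 8; len -= 8)
                    writeq(*s++, d++);
    }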
Not tested on platforms other than x86_64, because this patch
disables the PIO feature on all other platforms. Compile-tested on
x86 to ensure the build works.
The WARN_ON_ONCE() code in the previous version of this patch
has been moved into the internal sfc debug driver as the
assertion was unnecessary in the upstream kernel code.
This bug fix applies to v3.13 and v3.14 stable branches.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is defined then NET_IP_ALIGN
will be defined as 0, so this macro is redundant.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit ee45fd92c7 ("sfc: Use TX PIO
for sufficiently small packets") introduced the following warning:
drivers/net/ethernet/sfc/tx.c: In function 'efx_enqueue_skb':
drivers/net/ethernet/sfc/tx.c:432:1: warning: label 'finish_packet' defined but not used
Stick the label inside the same #ifdef as the code that calls it.
Note that the warning is only seen on arches that do not set
ARCH_HAS_IOREMAP_WC, such as arm, mips, sparc, ...; the others
enable the write-combining code and hence use the label.
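i.e. (a sketch; the statement after the label is illustrative):

    #ifdef EFX_USE_PIO
    finish_packet:
    #endif
            netdev_tx_sent_queue(tx_queue->core_txq, skb->len);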
Cc: Jon Cooper <jcooper@solarflare.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Acked-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When using firmware assisted TSO, we use a single DMA mapping for
the linear area of a TSO skb.
We still have to segment the super-packet and insert a descriptor
containing the original headers before each segment of payload, so we
can unmap the linear area only after the last segment is completed.
The unmapping information for the linear area is therefore associated
with the last header descriptor.
We calculate the DMA address to unmap from using the map length and
the invariant that the end of the DMA mapping matches the end of
the data referenced by the last descriptor. But this invariant is
broken when there is TCP payload in the linear area.
Fix this by adding and using an explicit dma_offset field.
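The unmap address is then recovered explicitly instead of being
inferred from the mapping length; a sketch of the completion path:

    if (buffer->unmap_len) {
            struct device *dma_dev = &tx_queue->efx->pci_dev->dev;
            dma_addr_t unmap_addr = buffer->dma_addr - buffer->dma_offset;

            if (buffer->flags & EFX_TX_BUF_MAP_SINGLE)
                    dma_unmap_single(dma_dev, unmap_addr,
                                     buffer->unmap_len, DMA_TO_DEVICE);
            else
                    dma_unmap_page(dma_dev, unmap_addr,
                                   buffer->unmap_len, DMA_TO_DEVICE);
            buffer->unmap_len = 0;
    }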
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Sufficiently small linear packets can be copied into the PIO buffer
with a single call to memcpy_toio(). Non-linear packets require an
intermediate cache-line-sized buffer.
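For the linear case this reduces to a single copy (a sketch; the
non-linear path gathers into the cache-line-sized buffer and flushes
it line by line):

    /* linear packet: copy the whole frame straight into the aperture */
    memcpy_toio(tx_queue->piobuf, skb->data, skb_headlen(skb));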
[bwh: I wrote the first version of this, but Jon did the hard work to
handle non-linear packets.]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Try to allocate a segment of PIO buffer to each TX channel. If
allocation fails, log an error but continue.
PIO buffers must be mapped separately from the NIC registers, with
write-combining enabled. Where the host page size is 4K, we could
potentially map each VI's registers and PIO buffer separately.
However, this would add significant complexity, and we also need to
support architectures such as POWER which have a greater page size.
So make a single contiguous write-combining mapping after the
uncacheable mapping, aligned to the host page size, and link PIO
buffers there. Where necessary, allocate additional VIs within
the write-combining mapping purely for access to PIO buffers.
Link PIO buffers to TX queues and the additional VIs in
efx_ef10_dimension_resources(), and again in efx_ef10_init_nic()
after an MC reboot.
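The resulting layout is roughly (a sketch; sizes are illustrative):

    /* uncacheable mapping covering the VI registers ... */
    efx->membase = ioremap_nocache(efx->membase_phys, uc_mem_map_size);

    /* ... then one write-combining mapping, aligned to the host
     * page size, holding all PIO buffers
     */
    nic_data->wc_membase = ioremap_wc(efx->membase_phys + uc_mem_map_size,
                                      wc_mem_map_size);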
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Segmentation remains in the driver, but we generate option descriptors
describing the required packet editing rather than making our own
copies.
Reduce tso_state::ipv4_id to 16 bits, so it doesn't overflow into the
TCP_FLAGS field of the option descriptor.
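Each segment is then preceded by an option descriptor carrying the
required edits, roughly (field names as assumed from the EF10
descriptor definitions):

    EFX_POPULATE_QWORD_5(buffer->option,
                         ESF_DZ_TX_DESC_IS_OPT, 1,
                         ESF_DZ_TX_OPTION_TYPE, ESE_DZ_TX_OPTION_DESC_TSO,
                         ESF_DZ_TX_TSO_IP_ID, st->ipv4_id,
                         ESF_DZ_TX_TSO_TCP_SEQNO, st->seqnum,
                         ESF_DZ_TX_TSO_TCP_MSS, skb_shinfo(skb)->gso_size);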
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Update the dates for files that have been added to in 2012-2013.
Drop the 'Solarstorm' brand name that's still lingering here.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The TX path firmware for EF10 supports 'option descriptors' to control
offloads and various other features. Add a flag and field for these
in struct efx_tx_buffer, and don't treat them as DMA descriptors on
completion.
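On completion, option descriptors are recognised by the flag and
skipped, since they carry no DMA mapping or skb (a sketch; the flag
name follows the existing EFX_TX_BUF_* convention):

    if (buffer->flags & EFX_TX_BUF_OPTION) {
            /* nothing to unmap or free */
            buffer->flags = 0;
            return;
    }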
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Add a counter for TX merged completion events.
This is implemented in the common TX path, because the NIC event
handlers only know how many descriptors were completed, not how many
packets.
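The counter itself is trivial (sketch):

    /* one TX event completed multiple packets => merged completion */
    if (pkts_compl > 1)
            ++tx_queue->merge_events;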
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Currently efx_stop_datapath() will try to flush our DMA queues (if DMA
is enabled), then finalise software and hardware state for each queue.
However, for EF10 we must ask the MC to finalise each queue, which
implicitly starts flushing it, and then wait for the flush events.
We therefore need to delegate more of this to the NIC type.
Combine all the hardware operations into a new NIC-type operation
efx_nic_type::fini_dmaq, and call this before tearing down the
software state and buffers for all the DMA queues.
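After this change efx_stop_datapath() looks roughly like (sketch):

    rc = efx->type->fini_dmaq(efx);  /* flush/finalise in hardware */
    if (rc)
            netif_err(efx, drv, efx->net_dev, "failed to flush queues\n");

    efx_for_each_channel(channel, efx) {
            efx_for_each_channel_rx_queue(rx_queue, channel)
                    efx_fini_rx_queue(rx_queue);
            efx_for_each_possible_channel_tx_queue(tx_queue, channel)
                    efx_fini_tx_queue(tx_queue);
    }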
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Most call sites for efx_nic_alloc_buffer() are part of the probe or
reconfiguration paths and can allocate with GFP_KERNEL. A few others
should use GFP_NOIO (I think). Only one is in atomic context and
must use the current GFP_ATOMIC.
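The allocator therefore takes a gfp_t argument; a sketch of the new
signature and a probe-path call:

    int efx_nic_alloc_buffer(struct efx_nic *efx, struct efx_buffer *buffer,
                             unsigned int len, gfp_t gfp_flags);

    /* probe path: may sleep */
    rc = efx_nic_alloc_buffer(efx, &tx_queue->txd.buf,
                              entries * sizeof(efx_qword_t), GFP_KERNEL);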
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Add PTP IEEE-1588 support and make it accessible via the PHC
subsystem.
This work is based on prior code by Andrew Jackson.
Signed-off-by: Stuart Hodgson <smhodgson@solarflare.com>
[bwh:
- Add byte order conversion in efx_ptp_send_times()
- Simplify conversion of PPS event times
- Add the built-in vs module check to CONFIG_SFC_PTP dependencies]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
We only use tso_state::full_packet_space to calculate the IPv4 tot_len
or IPv6 payload_len, not to set tso_state::packet_space. Replace it
with an ip_base_len field holding the value of tot_len or payload_len
before including the TCP payload, which is much more useful when
constructing the new headers.
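Building each new segment header then becomes (sketch):

    if (st->protocol == htons(ETH_P_IP))
            ip->tot_len = htons(st->ip_base_len + st->packet_space);
    else
            ipv6->payload_len = htons(st->ip_base_len + st->packet_space);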
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
TSO header buffers contain a control structure immediately followed by
the packet headers, and are kept on a free list when not in use. This
complicates buffer management and tends to result in cache read misses
when we recycle such buffers (particularly if DMA-coherent memory
requires caches to be disabled).
Replace the free list with a simple mapping by descriptor index. We
know that there is always a payload descriptor between any two
descriptors with TSO header buffers, so we can allocate only one
such buffer for each two descriptors.
While we're at it, use a standard error code for allocation failure,
not -1.
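Header buffer lookup becomes a pure function of the insertion index
(a sketch; the constants are illustrative):

    /* one header buffer for each two descriptors */
    unsigned int index = (tx_queue->insert_count & tx_queue->ptr_mask) / 2;
    struct efx_buffer *page_buf =
            &tx_queue->tsoh_page[index / TSOH_PER_PAGE];
    unsigned int offset =
            TSOH_STD_SIZE * (index % TSOH_PER_PAGE) + NET_IP_ALIGN;
    u8 *header = (u8 *)page_buf->addr + offset;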
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
We now have a definite upper bound on the number of descriptors per
skb; use that to stop the queue when the next packet might not fit.
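Sketch of the check performed after each enqueue (the threshold
derivation is illustrative):

    /* stop if a further worst-case skb might not fit */
    unsigned int fill_level =
            tx_queue->insert_count - tx_queue->old_read_count;

    if (unlikely(fill_level >= efx->txq_stop_thresh))
            netif_tx_stop_queue(tx_queue->core_txq);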
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Add a flags field to struct efx_tx_buffer, replacing the
continuation and map_single booleans.
Since a single descriptor cannot be both a TSO header and the last
descriptor for an skb, unionise efx_tx_buffer::{skb,tsoh} and add
flags for validity of these fields.
Clear all flags in free buffers (whereas previously the continuation
flag would be set).
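The buffer structure then looks roughly like this (flag values are
illustrative):

    #define EFX_TX_BUF_CONT         1  /* not the last descriptor of a packet */
    #define EFX_TX_BUF_SKB          2  /* skb field is valid */
    #define EFX_TX_BUF_TSOH         4  /* tsoh field is valid */
    #define EFX_TX_BUF_MAP_SINGLE   8  /* unmap with dma_unmap_single() */

    struct efx_tx_buffer {
            union {
                    const struct sk_buff *skb;
                    struct efx_tso_header *tsoh;
            };
            dma_addr_t dma_addr;
            unsigned short flags;      /* EFX_TX_BUF_* */
            unsigned short len;
            unsigned short unmap_len;
    };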
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Currently an skb requiring TSO may not fit within a minimum-size TX
queue. The TX queue selected for the skb may stall and trigger the TX
watchdog repeatedly (since the problem skb will be retried after the
TX reset). This issue is designated as CVE-2012-3412.
Set the maximum number of TSO segments for our devices to 100. This
should make no difference to behaviour unless the actual MSS is less
than about 700. Increase the minimum TX queue size accordingly to
allow for 2 worst-case skbs, so that there will definitely be space
to add an skb after we wake a queue.
To avoid invalidating existing configurations, change
efx_ethtool_set_ringparam() to fix up values that are too small rather
than returning -EINVAL.
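Concretely (a sketch; the worst-case descriptor helper is assumed):

    #define EFX_TSO_MAX_SEGS        100

    /* room for two worst-case skbs, so a queue can always be re-woken */
    #define EFX_TXQ_MIN_ENT(efx)    (2 * efx_tx_max_skb_descs(efx))

    net_dev->gso_max_segs = EFX_TSO_MAX_SEGS;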
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is nothing in the VLAN driver or core VLAN support that
invalidates the TCP and IP header offsets.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
tso_state::packet_space is always set in tso_start_packet(); the
value set in tso_start() is not used, and is also incorrect.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The 'page size' for PCIe DMA, i.e. the alignment of boundaries at
which DMA must be broken, is 4KB. Name this value as EFX_PAGE_SIZE
and use it in efx_max_tx_len(). Redefine EFX_BUF_SIZE as
EFX_PAGE_SIZE since its value is also a result of that requirement,
and use it in efx_init_special_buffer().
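i.e. (sketch):

    /* alignment at which PCIe DMA bursts must be split */
    #define EFX_PAGE_SIZE   4096

    /* special buffer size: a consequence of the same DMA restriction */
    #define EFX_BUF_SIZE    EFX_PAGE_SIZE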
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The out-of-tree version of the sfc driver used to run a self-test on
each device before registering it. Although this was never included
in-tree, some functions still have checks for this special case,
which cannot actually occur.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The advantage of kcalloc is that it prevents the integer overflows
that could result from multiplying the number of elements by the
element size, and it is also a bit nicer to read.
The semantic patch that makes this change is available at
https://lkml.org/lkml/2011/11/25/107
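The transformation is of the form:

    -   ptr = kzalloc(count * sizeof(*ptr), GFP_KERNEL);
    +   ptr = kcalloc(count, sizeof(*ptr), GFP_KERNEL);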
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
As soon as an skb has been pushed to the hardware, it can be
completed and freed, so we must not dereference it afterwards.
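i.e. capture anything needed from the skb before ringing the
doorbell (a sketch; the byte counter is illustrative):

    unsigned int len = skb->len;    /* read before the push */

    efx_nic_push_buffers(tx_queue); /* skb may be completed and freed
                                     * from this point on
                                     */
    tx_queue->tx_bytes += len;      /* safe: no skb dereference */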
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Changes to sfc to use byte queue limits.
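At its core this pairs the bytes queued with the bytes completed
(sketch):

    /* enqueue path */
    netdev_tx_sent_queue(tx_queue->core_txq, skb->len);

    /* completion path */
    netdev_tx_completed_queue(tx_queue->core_txq, pkts_compl, bytes_compl);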
Signed-off-by: Tom Herbert <therbert@google.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To ease skb->truesize sanitization, it is better to localize all
references to skb frag sizes.
Define accessors : skb_frag_size() to fetch frag size, and
skb_frag_size_{set|add|sub}() to manipulate it.
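Call sites change mechanically, e.g.:

    -   len = frag->size;
    +   len = skb_frag_size(frag);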
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When I converted some drivers from pci_map_page to skb_frag_dma_map I
neglected to convert PCI_DMA_xDEVICE into DMA_x_DEVICE and
pci_dma_mapping_error into dma_mapping_error.
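The corrected pattern (sketch):

    -   dma_addr = skb_frag_dma_map(&pci_dev->dev, frag, 0, len,
    -                               PCI_DMA_TODEVICE);
    -   if (pci_dma_mapping_error(pci_dev, dma_addr))
    +   dma_addr = skb_frag_dma_map(&pci_dev->dev, frag, 0, len,
    +                               DMA_TO_DEVICE);
    +   if (dma_mapping_error(&pci_dev->dev, dma_addr))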
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Solarflare linux maintainers <linux-net-drivers@solarflare.com>
Cc: Steve Hodgson <shodgson@solarflare.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Move the Solarflare drivers into drivers/net/ethernet/sfc/ and make
the necessary Kconfig and Makefile changes.
CC: Steve Hodgson <shodgson@solarflare.com>
CC: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>