Commit Graph

73 Commits

Author SHA1 Message Date
Linus Torvalds
7c225c69f8 Merge branch 'akpm' (patches from Andrew)
Merge updates from Andrew Morton:

 - a few misc bits

 - ocfs2 updates

 - almost all of MM

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (131 commits)
  memory hotplug: fix comments when adding section
  mm: make alloc_node_mem_map a void call if we don't have CONFIG_FLAT_NODE_MEM_MAP
  mm: simplify nodemask printing
  mm,oom_reaper: remove pointless kthread_run() error check
  mm/page_ext.c: check if page_ext is not prepared
  writeback: remove unused function parameter
  mm: do not rely on preempt_count in print_vma_addr
  mm, sparse: do not swamp log with huge vmemmap allocation failures
  mm/hmm: remove redundant variable align_end
  mm/list_lru.c: mark expected switch fall-through
  mm/shmem.c: mark expected switch fall-through
  mm/page_alloc.c: broken deferred calculation
  mm: don't warn about allocations which stall for too long
  fs: fuse: account fuse_inode slab memory as reclaimable
  mm, page_alloc: fix potential false positive in __zone_watermark_ok
  mm: mlock: remove lru_add_drain_all()
  mm, sysctl: make NUMA stats configurable
  shmem: convert shmem_init_inodecache() to void
  Unify migrate_pages and move_pages access checks
  mm, pagevec: rename pagevec drained field
  ...
2017-11-15 19:42:40 -08:00
Mel Gorman
453f85d43f mm: remove __GFP_COLD
As the page free path makes no distinction between cache hot and cold
pages, there is no real useful ordering of pages in the free list that
allocation requests can take advantage of.  Judging from the users of
__GFP_COLD, it is likely that a number of them are the result of copying
other sites instead of actually measuring the impact.  Remove the
__GFP_COLD parameter which simplifies a number of paths in the page
allocator.

This is potentially controversial but bear in mind that the size of the
per-cpu pagelists versus modern cache sizes means that the whole per-cpu
list can often fit in the L3 cache.  Hence, there is only a potential
benefit for microbenchmarks that alloc/free pages in a tight loop.  It's
even worse when THP is taken into account which has little or no chance
of getting a cache-hot page as the per-cpu list is bypassed and the
zeroing of multiple pages will thrash the cache anyway.

The truncate microbenchmarks are not shown as this patch affects the
allocation path and not the free path.  A page fault microbenchmark was
tested but it showed no significant difference, which is not surprising
given that the __GFP_COLD branches are a minuscule percentage of the
fault path.
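
As a hedged illustration of what this means at a call site (the call below
is hypothetical, not a specific site touched by this patch):

    struct page *page;
    int nid = numa_node_id();

    /* Before: callers could hint that a cache-cold page was acceptable. */
    page = __alloc_pages_node(nid, GFP_KERNEL | __GFP_COLD, 0);

    /* After: the flag no longer exists and the hint is simply dropped. */
    page = __alloc_pages_node(nid, GFP_KERNEL, 0);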

Link: http://lkml.kernel.org/r/20171018075952.10627-9-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-11-15 18:21:06 -08:00
Kees Cook
7aa1402e2e net: ethernet/sfc: Convert timers to use timer_setup()
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
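
As a sketch of the conversion pattern (the structure and callback names
below are hypothetical, not the sfc driver's actual code):

    struct example_nic {
        struct timer_list stats_timer;
    };

    static void example_stats_work(struct timer_list *t)
    {
        /* from_timer() recovers the enclosing structure from the timer. */
        struct example_nic *nic = from_timer(nic, t, stats_timer);

        /* ... periodic work using nic ... */
    }

    /* Old: setup_timer(&nic->stats_timer, callback, (unsigned long)nic);
     * New: the callback receives the timer_list pointer directly. */
    timer_setup(&nic->stats_timer, example_stats_work, 0);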

Cc: Solarflare linux maintainers <linux-net-drivers@solarflare.com>
Cc: Edward Cree <ecree@solarflare.com>
Cc: Bert Kenward <bkenward@solarflare.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jiri Pirko <jiri@mellanox.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-25 12:57:33 +09:00
Jon Cooper
da50ae2eae sfc: set csum_level for encapsulated packets
Set the csum_level for encapsulated packets whose combination of encapsulation
 type, l3 class and l4 class requires it.
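
As a rough sketch of the mechanism (illustrative, not the driver's exact
code), reporting the extra verified checksum level to the stack looks like:

    /* Hardware validated both the outer and the inner checksum. */
    skb->ip_summed = CHECKSUM_UNNECESSARY;
    skb->csum_level = 1;    /* one level beyond the outermost checksum */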

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-09 16:47:53 -05:00
Eric Dumazet
e7fe949126 sfc: get rid of custom busy polling code
In linux-4.5, busy polling was implemented in core
NAPI stack, meaning that all custom implementation can
be removed from drivers.

Not only do we remove a lot of tricky code, we also remove
one lock operation in the fast path.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Edward Cree <ecree@solarflare.com>
Cc: Bert Kenward <bkenward@solarflare.com>
Acked-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-03 09:56:50 -05:00
Edward Cree
e01b16a7e2 sfc: remove EFX_BUG_ON_PARANOID, use EFX_WARN_ON_[ONCE_]PARANOID instead
Logically, EFX_BUG_ON_PARANOID can never be correct: BUG_ON should
 only be used when it is not possible to continue without potential harm,
 and since the non-DEBUG driver will continue regardless (the BUG_ON is
 compiled out), the BUG_ON cannot be needed in the DEBUG driver either.
So, replace every EFX_BUG_ON_PARANOID with either an EFX_WARN_ON_PARANOID
 or the newly defined EFX_WARN_ON_ONCE_PARANOID.
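
A hedged sketch of the resulting macro shape (names are prefixed EXAMPLE_
because the real definitions live in the sfc headers and may differ in
detail):

    #ifdef DEBUG
    #define EXAMPLE_WARN_ON_PARANOID(x)       WARN_ON(x)
    #define EXAMPLE_WARN_ON_ONCE_PARANOID(x)  WARN_ON_ONCE(x)
    #else
    #define EXAMPLE_WARN_ON_PARANOID(x)       do {} while (0)
    #define EXAMPLE_WARN_ON_ONCE_PARANOID(x)  do {} while (0)
    #endif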

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-12-03 16:11:00 -05:00
Edward Cree
5a6681e22c sfc: separate out SFC4000 ("Falcon") support into new sfc-falcon driver
Rationale: The differences between Falcon and Siena are in many ways larger
 than those between Siena and EF10 (despite Siena being nominally "Falcon-
 architecture"); for instance, Falcon has no MCPU, so there is no MCDI.
 Removing Falcon support from the sfc driver should simplify the latter,
 and avoid the possibility of Falcon support being broken by changes to sfc
 (which are rarely if ever tested on Falcon, it being end-of-lifed hardware).

The sfc-falcon driver created in this changeset is essentially a copy of the
 sfc driver, but with Siena- and EF10-specific code, including MCDI, removed
 and with the "efx_" identifier prefix changed to "ef4_" (for "EFX 4000-
 series") to avoid collisions when both drivers are built-in.

This changeset removes Falcon from the sfc driver's PCI ID table; then in
 sfc I've removed obvious Falcon-related code: I removed the Falcon NIC
 functions, Falcon PHY code, and EFX_REV_FALCON_*, then fixed up everything
 that referenced them.

Also, increment the minor version of both drivers (to 4.1).

For now, CONFIG_SFC selects CONFIG_SFC_FALCON, so that updating old configs
 doesn't cause Falcon support to disappear; but that should be undone at
 some point in the future.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-30 10:16:58 -05:00
Jon Cooper
faf8dcc12c sfc: Track RPS flow IDs per channel instead of per function
Otherwise we get confused when two flows on different channels get the
 same flow ID.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-31 20:30:25 -07:00
Edward Cree
68bb399e65 sfc: use flow dissector helpers for aRFS
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-29 22:38:55 -07:00
Eric Dumazet
93f93a4404 net: move skb_mark_napi_id() into core networking stack
We would like to automatically provide busy polling support
to all NAPI drivers, without them having to implement anything.

skb_mark_napi_id() can be called from napi_gro_receive() and
napi_get_frags().

Few drivers are still calling skb_mark_napi_id() because
they use netif_receive_skb(). They should eventually call
napi_gro_receive() instead. I will leave this to drivers
maintainers.
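
Illustrative receive-path sketch (hypothetical code, not taken from any
particular driver): once the core marks the NAPI id, a driver only needs
the standard delivery call:

    static void example_rx_complete(struct napi_struct *napi,
                                    struct sk_buff *skb)
    {
        /* napi_gro_receive() now calls skb_mark_napi_id() internally,
         * so the driver no longer has to. */
        napi_gro_receive(napi, skb);
    }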

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-11-18 16:17:41 -05:00
Daniel Pieczko
9eb0a5d190 sfc: free multiple Rx buffers when required
When Rx packet data must be dropped, all the buffers
associated with that Rx packet must be freed. Extend
and rename efx_free_rx_buffer() to efx_free_rx_buffers()
and loop through all the fragments.
By doing so this patch fixes a possible memory leak.
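
A hedged sketch of the "free every fragment" loop (names approximate the
driver's; details may differ):

    static void example_free_rx_buffers(struct efx_rx_queue *rx_queue,
                                        struct efx_rx_buffer *rx_buf,
                                        unsigned int n_frags)
    {
        do {
            if (rx_buf->page) {
                put_page(rx_buf->page);
                rx_buf->page = NULL;
            }
            rx_buf = efx_rx_buf_next(rx_queue, rx_buf);
        } while (--n_frags);
    }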

Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-31 17:36:20 -07:00
Alexandre Rames
36763266bb sfc: Add support for busy polling
This patch adds the sfc driver code for implementing busy polling.
It adds the ndo_busy_poll method and locking between it and the NAPI poll.
It also adds each NAPI instance to the napi_hash right after netif_napi_add().

It uses efx_start_eventq and efx_stop_eventq in the self tests.
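
For the napi_hash part, a minimal sketch of what "right after
netif_napi_add()" means (field names are the driver's, but the snippet is
illustrative):

    netif_napi_add(efx->net_dev, &channel->napi_str, efx_poll, napi_weight);
    napi_hash_add(&channel->napi_str);  /* make the NAPI visible to busy polling */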

Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-22 19:55:20 -07:00
Andrew Rybchenko
8ccf3800db sfc: Add per-queue statistics in ethtool
Implement per channel software TX and RX packet counters
accessed as ethtool statistics.

This allows cross-checking against the MAC statistics.

Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-17 16:48:36 -07:00
Edward Cree
e4d112e4f9 sfc: add extra RX drop counters for nodesc_trunc and noskb_drop
Added a counter rx_noskb_drop for failure to allocate an skb.
Summed the per-channel rx_nodesc_trunc counters earlier so that they can
 be included in rx_dropped.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-15 22:53:34 -07:00
Tom Herbert
c7cb38af79 net: sfc calls skb_set_hash
Drivers should call skb_set_hash to set the hash and its type
in an skbuff.
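
A minimal sketch of the call this implies (the hash value and type below
are placeholders):

    /* Propagate the hardware RX hash together with its type. */
    skb_set_hash(skb, le32_to_cpu(rx_hash), PKT_HASH_TYPE_L4);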

Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-12-18 15:00:53 -05:00
Jon Cooper
bd9a265db2 sfc: Add RX packet timestamping for EF10
The EF10 firmware can optionally insert RX timestamps in the packet
prefix.  These only include the clock minor value.  We must also
enable periodic time sync events on each event queue which provide
the high bits of the clock value.

[bwh: Combined and rebased several changes.
 Added the above description and some sanity checks for inline vs
 separate timestamps.
 Changed efx_rx_skb_attach_timestamp() to read the packet prefix
 from the skb head area.]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-12-12 22:07:13 +00:00
Ben Hutchings
2ccd0b1925 sfc: Copy RX prefix into skb head area in efx_rx_mk_skb()
We can potentially pull the entire packet contents into the head area
and then free the page it was in.  In order to read an inline
timestamp safely, we need to copy the prefix into the head area as
well.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-12-12 22:07:12 +00:00
Jon Cooper
cce28794bc sfc: Make initial fill of RX descriptors synchronous
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-12-12 22:06:50 +00:00
Andrew Rybchenko
2ec030144f sfc: RX buffer allocation takes prefix size into account in IP header alignment
rx_prefix_size is 4-byte aligned on Falcon/Siena (16 bytes), but it is 14
on EF10.  It therefore has to be taken into account when the architecture
requires the IP header to be 4-byte aligned (via NET_IP_ALIGN).

Fixes: 8127d661e7 ('sfc: Add support for Solarflare SFC9100 family')
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-12-06 22:27:52 +00:00
Ben Hutchings
c47b2d9d56 sfc: Support ARFS for IPv6 flows
Extend efx_filter_rfs() to map TCP/IPv6 and UDP/IPv6 flows into
efx_filter_spec.  These are only supported on EF10; on Falcon and
Siena they will be rejected by efx_farch_filter_from_gen_spec().

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20 19:32:00 +01:00
Ben Hutchings
f7a6d2c442 sfc: Update copyright banners
Update the dates for files that have been added to in 2012-2013.
Drop the 'Solarstorm' brand name that's still lingering here.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-29 23:34:51 +01:00
Jon Cooper
e8c68c0a09 sfc: Prepare for RX scatter on EF10
RX DMA scatter is always enabled on EF10.  Adjust the common RX
completion handling to allow for this.

RX completion events on EF10 include the length used from a single
descriptor, not the cumulative length used.  Add a field to struct
efx_rx_queue to hold the cumulative length.

[bwh: Also fix a related comment]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-29 18:12:12 +01:00
Ben Hutchings
b883d0bd4a sfc: Document conditions for multicast replication vs filter replacement
Add the efx_filter_is_mc_recip() function to decide whether a filter
is for a multicast recipient and can coexist with other filters with
the same match values.  Update efx_filter_insert_filter() kernel-doc
to explain the conditions for this.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-29 18:12:07 +01:00
Ben Hutchings
3dced740c2 sfc: Add support for reading packet length from prefix
Define a flag for struct efx_rx_buffer and efx_rx_packet() that
indicates packet length must be read from the prefix.  If this
is set, read the length in __efx_rx_packet() (when the prefix
should have arrived in cache).

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-27 22:29:07 +01:00
Jon Cooper
43a3739d55 sfc: Generalise packet hash lookup to support EF10 RX prefix
EF10 uses an entirely different RX prefix format from Falcon-arch.
Extend struct efx_nic_type to describe this.

[bwh: Also replace the magic numbers used for the Falcon-arch RX prefix]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-27 22:28:24 +01:00
Ben Hutchings
add7247718 sfc: Make most filter operations NIC-type-specific
Aside from accelerated RFS, there is almost nothing that can be shared
between the filter table implementations for the Falcon architecture
and EF10.

Move the few shared functions into efx.c and rx.c and the rest into
farch.c.  Introduce efx_nic_type operations for the implementation and
inline wrapper functions that call these.
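
The op-plus-inline-wrapper pattern, as a hedged sketch (member and
function names are illustrative, not the exact sfc definitions):

    struct example_nic_type {
        s32 (*filter_insert)(struct efx_nic *efx,
                             struct efx_filter_spec *spec, bool replace);
    };

    static inline s32 example_filter_insert(struct efx_nic *efx,
                                            const struct example_nic_type *type,
                                            struct efx_filter_spec *spec,
                                            bool replace)
    {
        /* The wrapper just dispatches through the per-NIC-type ops table. */
        return type->filter_insert(efx, spec, replace);
    }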

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-22 19:25:57 +01:00
Ben Hutchings
e42c3d85af sfc: Refactor queue teardown sequence to allow for EF10 flush behaviour
Currently efx_stop_datapath() will try to flush our DMA queues (if DMA
is enabled), then finalise software and hardware state for each queue.
However, for EF10 we must ask the MC to finalise each queue, which
implicitly starts flushing it, and then wait for the flush events.
We therefore need to delegate more of this to the NIC type.

Combine all the hardware operations into a new NIC-type operation
efx_nic_type::fini_dmaq, and call this before tearing down the
software state and buffers for all the DMA queues.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-21 19:49:23 +01:00
Ben Hutchings
d8aec745dd sfc: Stop RX refill before flushing RX queues
rx_queue::enabled guards refill, so rename it to reflect that.  Clear
it at the start of the queue teardown process rather than waiting for
the RX queue to be flushed.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-21 19:48:54 +01:00
Ben Hutchings
734d4e159b sfc: Fix memory leak when discarding scattered packets
Commit 2768935a46 ('sfc: reuse pages to avoid DMA mapping/unmapping
costs') did not fully take account of DMA scattering which was
introduced immediately before.  If a received packet is invalid and
must be discarded, we only drop a reference to the first buffer's
page, but we need to drop a reference for each buffer the packet
used.

I think this bug was missed partly because efx_recycle_rx_buffers()
was not renamed and so no longer does what its name says.  It does not
change the state of buffers, but only prepares the underlying pages
for recycling.  Rename it accordingly.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-07-05 01:29:15 -07:00
Ben Hutchings
636d73da27 sfc: Improve test for IOMMU in use
The device::iommu_group field may be set even if no IOMMU is in use.
iommu_present() is still a better indicator, although it doesn't tell
us whether *our* device is affected.
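
A hedged sketch of the check (the helper name is hypothetical;
iommu_present() and pci_bus_type are real kernel symbols):

    #include <linux/iommu.h>
    #include <linux/pci.h>

    static bool example_iommu_in_use(void)
    {
        /* Better than testing device::iommu_group, though still not
         * specific to our own device. */
        return iommu_present(&pci_bus_type);
    }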

Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-06-24 20:02:53 +01:00
Ben Hutchings
e79255de85 sfc: Do not pass non-TCP packets into GRO code
GRO can handle non-TCP packets and pass them up without coalescing,
but it has to do some extra work to parse the packet which we can
bypass using the hardware parse result.  (This condition yields a
false negative for TCP/IPv6 packets received by Falcon, but its
performance is already poor in that case due to lack of checksum
offload.)
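
A rough sketch of the dispatch decision (the flag names below are
hypothetical stand-ins for the driver's hardware parse flags):

    #define EXAMPLE_PKT_CSUMMED  0x1
    #define EXAMPLE_PKT_TCP      0x2

    static bool example_should_gro(unsigned int flags)
    {
        /* Only checksummed TCP packets are worth handing to GRO;
         * everything else goes straight up the normal receive path. */
        return (flags & (EXAMPLE_PKT_CSUMMED | EXAMPLE_PKT_TCP)) ==
               (EXAMPLE_PKT_CSUMMED | EXAMPLE_PKT_TCP);
    }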

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-06-24 20:00:32 +01:00
Jon Cooper
d4ef5b6f37 sfc: Increase size of RX SKB header area
This allows the SKB to hold the headers more often without reallocation.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-06-24 19:58:28 +01:00
Jon Cooper
c99dffc417 sfc: Enable RX checksum offload for packets not handled by GRO
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-06-24 19:58:27 +01:00
Ben Hutchings
950c54df1e sfc: Reduce RX scatter buffer size, and reduce alignment if appropriate
efx_start_datapath() asserts that we can fit 2 RX scatter buffers plus
a software structure, each appropriately aligned, into a single page.
Where L1_CACHE_BYTES == 256 and PAGE_SIZE == 4096, which is the case
on s390, this assertion fails.

The current scatter buffer size is also not a multiple of 64 or 128,
which are more common cache line sizes.  If we can make both the start
and end of a scatter buffer cache-aligned, this will reduce the need
for read-modify-write operations on inter-processor links.

Fix the alignment by reducing EFX_RX_USR_BUF_SIZE to 2048 - 256 ==
1792.  (We could use 2048 - L1_CACHE_BYTES, but EFX_RX_USR_BUF_SIZE
also affects user-level networking where a larger amount of
housekeeping data may be needed.  Although this version of the driver
does not support user-level networking, I prefer to keep scattering
behaviour consistent with the out-of-tree version.)

This still doesn't fix the s390 build because like most architectures
it has NET_IP_ALIGN == 2.  When NET_IP_ALIGN != 0 we cannot achieve
cache line alignment at either the start or end of a scatter buffer,
so there is actually no point in padding the buffers to a multiple of
the cache line size.  All we need is 4-byte alignment of the network
header, so do that.

Adjust the assertions accordingly.
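
A worked sketch of the sizing (macro names are illustrative; the numbers
come from the description above):

    /* 2048 - 256 = 1792 bytes per scatter buffer, so two buffers plus a
     * software structure still fit in a 4096-byte page even with
     * 256-byte cache lines. */
    #define EXAMPLE_RX_USR_BUF_SIZE   (2048 - 256)

    /* With NET_IP_ALIGN != 0, cache-line alignment is unattainable, so
     * only 4-byte alignment of the network header is requested. */
    #define EXAMPLE_RX_BUF_ALIGNMENT  (NET_IP_ALIGN ? 4 : L1_CACHE_BYTES)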

Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-05-14 11:32:04 -07:00
Ben Hutchings
c14ff2ea2d sfc: Delete EFX_PAGE_IP_ALIGN, equivalent to NET_IP_ALIGN
The two architectures that define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
(powerpc and x86) now both define NET_IP_ALIGN as 0, so there is no
need for this optimisation any more.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-05-14 11:32:04 -07:00
stephen hemminger
debd0034de sfc: make local functions static
Sparse detected local functions that should be static; make them so.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-03-17 14:26:40 -04:00
Daniel Pieczko
1648a23fa1 sfc: allocate more RX buffers per page
Allocating 2 buffers per page is insanely inefficient when MTU is 1500
and PAGE_SIZE is 64K (as it usually is on POWER).  Allocate as many as
we can fit, and choose the refill batch size at run-time so that we
still always use a whole page at once.
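
A hedged sketch of the run-time sizing (variable names are illustrative;
buf_size and preferred_batch stand for the driver's inputs):

    unsigned int bufs_per_page   = PAGE_SIZE / buf_size;
    unsigned int pages_per_batch = DIV_ROUND_UP(preferred_batch,
                                                bufs_per_page);
    /* Refill in whole pages, however many buffers that is. */
    unsigned int batch_size      = pages_per_batch * bufs_per_page;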

[bwh: Fix loop condition to allow for compound pages; rebase]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:22:15 +00:00
Ben Hutchings
179ea7f039 sfc: Replace efx_rx_is_last_buffer() with a flag
This condition is brittle and we have lots of flags to spare.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:22:14 +00:00
Daniel Pieczko
2768935a46 sfc: reuse pages to avoid DMA mapping/unmapping costs
On POWER systems, DMA mapping/unmapping operations are very expensive.
These changes reduce these costs by trying to reuse DMA mapped pages.

After all the buffers associated with a page have been processed and
passed up, the page is placed into a ring (if there is room).  For
each page that is required for a refill operation, a page in the ring
is examined to determine if its page count has fallen to 1, ie. the
kernel has released its reference to these packets.  If this is the
case, the page can be immediately added back into the RX descriptor
ring, without having to re-map it for DMA.

If the kernel is still holding a reference to this page, it is removed
from the ring and unmapped for DMA.  Then a new page, which can
immediately be used by RX buffers in the descriptor ring, is allocated
and DMA mapped.

The time a page needs to spend in the recycle ring before the kernel
has released its page references is based on the number of buffers
that use this page.  As large pages can hold more RX buffers, the RX
recycle ring can be shorter.  This reduces memory usage on POWER
systems, while maintaining the performance gain achieved by recycling
pages, following the driver change to pack more than two RX buffers
into large pages.

When an IOMMU is not present, the recycle ring can be small to reduce
memory usage, since DMA mapping operations are inexpensive.

With a small recycle ring, attempting to refill the descriptor queue
with more buffers than the equivalent size of the recycle ring could
ultimately lead to memory leaks if page entries in the recycle ring
were overwritten.  To prevent this, the check to see if the recycle
ring is full is changed to check if the next entry to be written is
NULL.
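
The reuse test at the heart of this, as a minimal sketch (illustrative,
not the driver's exact code):

    static bool example_page_is_reusable(struct page *page)
    {
        /* page_count() == 1 means the stack has dropped every reference
         * it took to packets built from this page, so the still-mapped
         * page can go straight back onto the RX descriptor ring. */
        return page_count(page) == 1;
    }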

[bwh: Combine and rebase several commits so this is complete
 before the following buffer-packing changes.  Remove module
 parameter.]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:22:13 +00:00
Ben Hutchings
85740cdf0b sfc: Enable RX DMA scattering where possible
Enable RX DMA scattering iff an RX buffer large enough for the current
MTU will not fit into a single page and the NIC supports DMA
scattering for kernel-mode RX queues.

On Falcon and Siena, the RX_USR_BUF_SIZE field is used as the DMA
limit for all RX queues with scatter enabled.  Set it to 1824,
matching what Onload uses now.

Maintain a statistic for frames truncated due to lack of descriptors
(rx_nodesc_trunc).  This is distinct from rx_frm_trunc which may be
incremented when scattering is disabled and implies an over-length
frame.

Whenever an MTU change causes scattering to be turned on or off,
update filters that point to the PF queues, but leave others
unchanged, as VF drivers assume scattering is off.

Add n_frags parameters to various functions, and make them iterate:
- efx_rx_packet()
- efx_recycle_rx_buffers()
- efx_rx_mk_skb()
- efx_rx_deliver()

Make efx_handle_rx_event() responsible for updating
efx_rx_queue::removed_count.

Change the RX pipeline state to a starting ring index and number of
fragments, and make __efx_rx_packet() responsible for clearing it.

Based on earlier versions by David Riddoch and Jon Cooper.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:22:12 +00:00
Ben Hutchings
b74e3e8cd6 sfc: Update RX buffer address together with length
Adjust rx_buf->page_offset when we eat the RX hash prefix.  Remove
efx_rx_buf_offset(), which is now redundant.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:22:11 +00:00
Ben Hutchings
5036b7c7b9 sfc: Explicitly prefetch RX hash prefix, not just Ethernet header
Currently we prefetch from the Ethernet header, but we will also read
the hash prefix.  In practice they should be in the same cache line
and this won't hurt, but it is still pointless to add on the hash
prefix size.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:22:10 +00:00
Ben Hutchings
b184f16b7f sfc: Replace efx_rx_buf_eh() with simpler efx_rx_buf_va()
efx_rx_buf_va() returns the virtual address of the current start of
the buffer.  The callers must add the hash prefix size themselves.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:22:09 +00:00
Ben Hutchings
ff734ef4bc sfc: Wrap __efx_rx_packet() with efx_rx_flush_packet()
The pipeline mechanism will need to change a bit for scattered
packets.  Add a wrapper to insulate efx_process_channel() from this.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:22:08 +00:00
Ben Hutchings
272baeeb6a sfc: Properly distinguish RX buffer and DMA lengths
Replace efx_nic::rx_buffer_len with efx_nic::rx_dma_len, the maximum
RX DMA length.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:22:06 +00:00
Alexandre Rames
97d48a10c6 sfc: Remove rx_alloc_method SKB
[bwh: Remove more dead code, and make efx_ptp_rx() pull the data it
 needs into the header area.]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:21:57 +00:00
Ben Hutchings
4a74dc65e3 sfc: Allow efx_channel_type::receive_skb() to reject a packet
Instead of having efx_ptp_rx() call netif_receive_skb() for an invalid
PTP packet, make it return false for rejected packets and have
efx_rx_deliver() pass them up.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-07 20:21:54 +00:00
Ben Hutchings
c73e787a8d sfc: Correct efx_rx_buffer::page_offset when EFX_PAGE_IP_ALIGN != 0
RX DMA buffers start at an offset of EFX_PAGE_IP_ALIGN bytes from the
start of a cache line.  This offset obviously needs to be included in
the virtual address, but this was missed in commit b590ace09d
('sfc: Fix efx_rx_buf_offset() in the presence of swiotlb') since
EFX_PAGE_IP_ALIGN is equal to 0 on both x86 and powerpc.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-03-06 17:57:25 +00:00
Ben Hutchings
b590ace09d sfc: Fix efx_rx_buf_offset() in the presence of swiotlb
We assume that the mapping between DMA and virtual addresses is done
on whole pages, so we can find the page offset of an RX buffer using
the lower bits of the DMA address.  However, swiotlb maps in units of
2K, breaking this assumption.

Add an explicit page_offset field to struct efx_rx_buffer.
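
A sketch of the resulting buffer descriptor (abbreviated; the layout is
illustrative):

    struct example_rx_buffer {
        dma_addr_t dma_addr;
        struct page *page;
        u16 page_offset;    /* explicit offset, no longer derived from
                             * the low bits of dma_addr */
        u16 len;
        u16 flags;
    };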

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-02-26 14:57:16 +00:00
Ben Hutchings
3a68f19d7a sfc: Properly sync RX DMA buffer when it is not the last in the page
We may currently allocate two RX DMA buffers to a page, and only unmap
the page when the second is completed.  We do not sync the first RX
buffer to be completed; this can result in packet loss or corruption
if the last RX buffer completed in a NAPI poll is the first in a page
and is not DMA-coherent.  (In the middle of a NAPI poll, we will
handle the following RX completion and unmap the page *before* looking
at the content of the first buffer.)
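
A hedged sketch of the per-buffer sync this implies (the helper is
hypothetical; dma_sync_single_for_cpu() is the real DMA API call):

    static void example_sync_rx_buffer(struct device *dma_dev,
                                       dma_addr_t dma_addr, unsigned int len)
    {
        /* Make the completed buffer's contents visible to the CPU before
         * reading it, instead of relying on the later unmap of the page. */
        dma_sync_single_for_cpu(dma_dev, dma_addr, len, DMA_FROM_DEVICE);
    }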

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-02-26 14:55:49 +00:00