Commit Graph

740894 Commits

Atul Gupta
dd0bed1665 tls: support for Inline tls record
Add a facility for Inline TLS drivers to register with net/tls. Set up
the TLS_HW_RECORD prot to listen on an offload device.

Cases handled (a hedged registration sketch follows this list):
- An Inline TLS device exists: set up the prot for TLS_HW_RECORD.
- At least one Inline TLS device exists: set TLS_HW_RECORD.
- If a non-inline device establishes the connection, fall back to TLS_SW_TX.
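
A minimal registration sketch, assuming the struct tls_device shape this
patch introduces; the chtls_* callbacks are hypothetical placeholders:

    #include <net/tls.h>

    static struct tls_device chtls_tls_dev = {
            .feature = chtls_inline_feature, /* reports inline TLS capability */
            .hash    = chtls_create_hash,    /* listen socket added on device */
            .unhash  = chtls_destroy_hash,   /* listen socket removed */
    };

    tls_register_device(&chtls_tls_dev);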

Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:37:32 -04:00
David S. Miller
d4069fe6fc Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:

====================
pull-request: bpf-next 2018-03-31

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Add a raw BPF tracepoint API in order to have a BPF program type that
   can access the kernel-internal arguments of tracepoints in their
   raw form, similar to kprobe-based BPF programs. This infrastructure
   also adds a new BPF_RAW_TRACEPOINT_OPEN command to the BPF syscall,
   which returns an anon-inode-backed fd for the tracepoint object; the
   BPF program is automatically detached and the tracepoint probe
   unregistered on fd release, from Alexei. (A hedged usage sketch
   follows this pull-request description.)

2) Add new BPF cgroup hooks at bind() and connect() entry in order to
   allow BPF programs to reject, inspect or modify user space passed
   struct sockaddr, and as well a hook at post bind time once the port
   has been allocated. They are used in FB's container management engine
   for implementing policy, replacing fragile LD_PRELOAD wrapper
   intercepting bind() and connect() calls that only works in limited
   scenarios like glibc based apps but not for other runtimes in
   containerized applications, from Andrey.

3) BPF_F_INGRESS flag support has been added to sockmap programs for
   their redirect helper call, bringing it in line with cls_bpf based
   programs. Support is added for both variants of sockmap programs,
   meaning for tx ULP hooks as well as recv skb hooks, from John.

4) Various improvements on BPF side for the nfp driver, besides others
   this work adds BPF map update and delete helper call support from
   the datapath, JITing of 32 and 64 bit XADD instructions as well as
   offload support of bpf_get_prandom_u32() call. Initial implementation
   of nfp packet cache has been tackled that optimizes memory access
   (see merge commit for further details), from Jakub and Jiong.

5) Removal of the struct bpf_verifier_env argument from the print_bpf_insn()
   API has been done in order to prepare for using print_bpf_insn()
   directly from the perf tool soon. This makes the print_bpf_insn() API
   more generic and pushes the env into private data. bpftool is adjusted
   as well for the print_bpf_insn() argument removal, from Jiri.

6) Couple of cleanups and prep work for the upcoming BTF (BPF Type
   Format). The latter will reuse the current BPF verifier log as
   well, thus bpf_verifier_log() is further generalized, from Martin.

7) For the bpf_getsockopt() and bpf_setsockopt() helpers, IPv4 IP_TOS read
   and write support has been added, in a similar fashion to the existing
   IPv6 IPV6_TCLASS socket option support we already have, from Nikita.

8) Fixes in recent sockmap scatterlist API usage, which did not use
   sg_init_table() for initialization, thus triggering a BUG_ON() in the
   scatterlist API when CONFIG_DEBUG_SG was enabled. This adds and
   uses a small helper, sg_init_marker(), to properly handle the affected
   cases, from Prashant. (A hedged sketch of the helper follows this
   pull-request description.)

9) Let the BPF core follow IDR code convention and therefore use the
   idr_preload() and idr_preload_end() helpers, which would also help
   idr_alloc_cyclic() under GFP_ATOMIC to better succeed under memory
   pressure, from Shaohua.

10) Last but not least, a spelling fix in an error message for the
    BPF cookie UID helper under BPF sample code, from Colin.
====================
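
A hedged userspace sketch of item (1), assuming uapi headers new enough to
define BPF_RAW_TRACEPOINT_OPEN; the tracepoint name and helper name are
illustrative, not part of the patch set:

    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Attach an already-loaded program (prog_fd) to a raw tracepoint.
     * The program is detached automatically when the returned fd is closed. */
    static int raw_tp_open(const char *name, int prog_fd)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.raw_tracepoint.name = (unsigned long)name; /* e.g. "sched_switch" */
            attr.raw_tracepoint.prog_fd = prog_fd;

            return syscall(__NR_bpf, BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
    }

A hedged sketch of the sg_init_marker() use in item (8); the helper name is
from the description above, the surrounding function is hypothetical:

    #include <linux/scatterlist.h>

    /* Entries are populated by hand, so sg_init_table() (which zeroes the
     * array) is not suitable; only the end marker is missing: */
    static void setup_sg(struct scatterlist *sg, unsigned int nents)
    {
            sg_init_marker(sg, nents);  /* set end marker, keep entries */
    }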

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:33:04 -04:00
David S. Miller
70ae7222c6 Merge branch 'inet-frags-bring-rhashtables-to-IP-defrag'
Eric Dumazet says:

====================
inet: frags: bring rhashtables to IP defrag

IP defrag processing is one of the remaining problematic layers in Linux.

It uses static hash tables of 1024 buckets, with up to 128 items per bucket.

A work queue is supposed to garbage collect items when the host is under
memory pressure, and to rebuild the hash table, changing the seed used in
hash computations.

This work queue blocks softirqs for up to 25 ms when doing a hash rebuild,
which occurs every 5 seconds if the host is under fire.

Then there is the problem of sharing this hash table for all netns.

It is time to switch to rhashtables, and allocate one of them per netns
to speed up netns dismantle, since this is a critical metric these days.

Lookup is now using RCU, and 64bit hosts can now provision whatever amount
of memory needed to handle the expected workloads.

v2: Addressed Herbert and Kirill feedbacks
  (Use rhashtable_free_and_destroy(), and split the big patch into small units)

v3: Removed the extra add_frag_mem_limit(...) from inet_frag_create()
    Removed the refcount_inc_not_zero() call from inet_frags_free_cb(),
    as we can exploit del_timer() return value.

v4: kbuild robot feedback about one missing static (squashed)
    Additional patches :
      inet: frags: do not clone skb in ip_expire()
      ipv6: frags: rewrite ip6_expire_frag_queue()
      rhashtable: reorganize struct rhashtable layout
      inet: frags: reorganize struct netns_frags
      inet: frags: get rid of ipfrag_skb_cb/FRAG_CB
      ipv6: frags: get rid of ip6frag_skb_cb/FRAG6_CB
      inet: frags: get rid of nf_ct_frag6_skb_cb/NFCT_FRAG6_CB
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:40 -04:00
Eric Dumazet
f2d1c724fc inet: frags: get rid of nf_ct_frag6_skb_cb/NFCT_FRAG6_CB
nf_ct_frag6_queue() uses skb->cb[] to store the fragment offset,
meaning that we could use two cache lines per skb when finding
the insertion point, if for some reason inet6_skb_parm size
is increased in the future.

By using skb->ip_defrag_offset instead of skb->cb[] we pack all the fields
in a single cache line, matching what we did for IPv4.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:40 -04:00
Eric Dumazet
219badfaad ipv6: frags: get rid of ip6frag_skb_cb/FRAG6_CB
ip6_frag_queue uses skb->cb[] to store the fragment offset, meaning that
we could use two cache lines per skb when finding the insertion point,
if for some reason inet6_skb_parm size is increased in the future.

By using skb->ip_defrag_offset instead of skb->cb[], we pack all
the fields in a single cache line, matching what we did for IPv4.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:40 -04:00
Eric Dumazet
bf66337140 inet: frags: get rid of ipfrag_skb_cb/FRAG_CB
ip_defrag uses skb->cb[] to store the fragment offset, and unfortunately
this integer is currently in a different cache line than skb->next,
meaning that we use two cache lines per skb when finding the insertion point.

By aliasing skb->ip_defrag_offset and skb->dev, we pack all the fields
in a single cache line and save precious memory bandwidth.

Note that after the fast path added by Changli Gao in commit
d6bebca92c ("fragment: add fast path for in-order fragments")
this change won't help the fast path, since we still need
to access prev->len (2nd cache line), but it will show great
benefits when the slow path is entered, since we perform
a linear scan of a potentially long list.

Also, note that this potentially long list is an attack vector;
we might consider also using an rb-tree there eventually.
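
A hedged sketch of the aliasing; the union layout inside struct sk_buff is
approximated from the description above:

    union {
            struct net_device *dev;             /* unused while the skb sits
                                                 * in the reassembly queue */
            int                ip_defrag_offset;
    };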

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:40 -04:00
Eric Dumazet
c2615cf5a7 inet: frags: reorganize struct netns_frags
Put the read-mostly fields in a separate cache line
at the beginning of struct netns_frags, to reduce
false sharing noticed in inet_frag_kill()

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
e5d672a078 rhashtable: reorganize struct rhashtable layout
While under a frags DDOS I noticed unfortunate false sharing between
@nelems and @params.automatic_shrinking.

Move @nelems to the end of struct rhashtable so that the first cache line
can be shared clean between all CPUs, as it is almost never dirtied.
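
An abridged sketch of the resulting layout; the fields shown are an
approximation, only their placement is the point:

    struct rhashtable {
            struct bucket_table __rcu *tbl;     /* read-mostly */
            unsigned int               key_len; /* read-mostly */
            struct rhashtable_params   p;       /* read-mostly */
            /* ... */
            atomic_t                   nelems;  /* dirtied on every insert or
                                                 * remove, so kept last */
    };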

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
05c0b86b96 ipv6: frags: rewrite ip6_expire_frag_queue()
Make it similar to IPv4 ip_expire(), and release the lock
before calling icmp functions.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
1eec5d5670 inet: frags: do not clone skb in ip_expire()
An skb_clone() was added in commit ec4fbd6475 ("inet: frag: release
spinlock before calling icmp_send()")

While fixing the bug at that time, it also added a very high cost
under a frags DDOS, as the ICMP rate limit is applied after this
expensive operation (skb_clone() + consume_skb(), implying memory
allocations, a copy, and freeing).

We can use skb_get(head) here; all we want is to make sure the skb won't
be freed by another cpu.
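
A hedged sketch of the resulting ip_expire() sequence (locking details
approximated):

    skb_get(head);                  /* reference bump, no copy */
    spin_unlock(&qp->q.lock);
    icmp_send(head, ICMP_TIME_EXCEEDED, ICMP_EXC_FRAGTIME, 0);
    kfree_skb(head);                /* drop our reference */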

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
3e67f106f6 inet: frags: break the 2GB limit for frags storage
Some users are willing to provision huge amounts of memory to be able
to perform reassembly reasonably well under pressure.

Current memory tracking uses one atomic_t and plain integers.

Switch to atomic_long_t so that 64bit arches can use more than 2GB,
without any cost for 32bit arches.

Note that this patch avoids an overflow error if high_thresh was set
to ~2GB, since this test in inet_frag_alloc() was never true:

if (... || frag_mem_limit(nf) > nf->high_thresh)
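
With the counter widened, a hedged sketch of the helpers (the bodies are an
assumption based on this description):

    static inline long frag_mem_limit(struct netns_frags *nf)
    {
            return atomic_long_read(&nf->mem);
    }

    static inline void add_frag_mem_limit(struct netns_frags *nf, long val)
    {
            atomic_long_add(val, &nf->mem);
    }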

Tested:

$ echo 16000000000 >/proc/sys/net/ipv4/ipfrag_high_thresh

<frag DDOS>

$ grep FRAG /proc/net/sockstat
FRAG: inuse 14705885 memory 16000002880

$ nstat -n ; sleep 1 ; nstat | grep Reas
IpReasmReqds                    3317150            0.0
IpReasmFails                    3317112            0.0

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
2d44ed22e6 inet: frags: remove inet_frag_maybe_warn_overflow()
This function is obsolete, after rhashtable addition to inet defrag.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
399d1404be inet: frags: get rid of inet_frag_evicting()
Removing this helper lets us refactor ip_expire() and drop one
indentation level.

Note: in the future, we should try hard to avoid the skb_clone(),
since it is a serious performance cost.
Under DDOS, the ICMP message won't be sent because of rate limits.

The fact that ip6_expire_frag_queue() does not use skb_clone() is
disturbing too. Presumably IPv6 has the same
issue as the one we fixed in commit ec4fbd6475
("inet: frag: release spinlock before calling icmp_send()")

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
6befe4a78b inet: frags: remove some helpers
Remove sum_frag_mem_limit(), ip_frag_mem() & ip6_frag_mem()

Also, since we use rhashtables, we can bring back the number of fragments
in "grep FRAG /proc/net/sockstat /proc/net/sockstat6" that was
removed in commit 434d305405 ("inet: frag: don't account number
of fragment queues").

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
648700f76b inet: frags: use rhashtables for reassembly units
Some applications still rely on IP fragmentation, and to be fair the Linux
reassembly unit does not work under any serious load.

It uses static hash tables of 1024 buckets, with up to 128 items per bucket (!!!)

A work queue is supposed to garbage collect items when the host is under
memory pressure, and to rebuild the hash table, changing the seed used in
hash computations.

This work queue blocks softirqs for up to 25 ms when doing a hash rebuild,
which occurs every 5 seconds if the host is under fire.

Then there is the problem of sharing this hash table for all netns.

It is time to switch to rhashtables, and allocate one of them per netns
to speed up netns dismantle, since this is a critical metric these days.

Lookup is now using RCU. A followup patch will even remove
the refcount hold/release left from prior implementation and save
a couple of atomic operations.

Before this patch, 16 cpus (on a 16-RX-queue NIC) could not handle more
than a 1 Mpps frags DDOS.

After the patch, I reach 9 Mpps without any tuning, and can use up to 2GB
of storage for the fragments (the exact number depends on frags being
evicted after timeout).

$ grep FRAG /proc/net/sockstat
FRAG: inuse 1966916 memory 2140004608

A followup patch will change the limits for 64bit arches.
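
A hedged sketch of per-family rhashtable parameters; the field values
approximate what a reassembly unit needs and are not copied from the patch:

    static const struct rhashtable_params ip4_rhash_params = {
            .head_offset         = offsetof(struct inet_frag_queue, node),
            .key_offset          = offsetof(struct inet_frag_queue, key),
            .key_len             = sizeof(struct frag_v4_compare_key),
            .automatic_shrinking = true,
    };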

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Florian Westphal <fw@strlen.de>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Alexander Aring <alex.aring@gmail.com>
Cc: Stefan Schmidt <stefan@osg.samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
ae6da1f503 rhashtable: add schedule points
Rehashing and destroying a large hash table takes a lot of time,
and happens in process context. It is safe to add cond_resched()
in rhashtable_rehash_table() and rhashtable_free_and_destroy().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:39 -04:00
Eric Dumazet
483a6e4fa0 inet: frags: refactor ipfrag_init()
We need to call inet_frags_init() before register_pernet_subsys(),
as a prereq for following patch ("inet: frags: use rhashtables for reassembly units")

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:38 -04:00
Eric Dumazet
807f1844df inet: frags: refactor lowpan_net_frag_init()
We want to call lowpan_net_frag_init() earlier.
Similar to commit "inet: frags: refactor ipv6_frag_init()"

This is a prereq to "inet: frags: use rhashtables for reassembly units"

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:38 -04:00
Eric Dumazet
5b975bab23 inet: frags: refactor ipv6_frag_init()
We want to call inet_frags_init() earlier.

This is a prereq to "inet: frags: use rhashtables for reassembly units"

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:38 -04:00
Eric Dumazet
093ba72914 inet: frags: add a pointer to struct netns_frags
In order to simplify the API, add to struct netns_frags a pointer to
the struct inet_frags descriptor. This will allow us to make things
less complex.

These functions no longer have a struct inet_frags parameter :

inet_frag_destroy(struct inet_frag_queue *q  /*, struct inet_frags *f */)
inet_frag_put(struct inet_frag_queue *q /*, struct inet_frags *f */)
inet_frag_kill(struct inet_frag_queue *q /*, struct inet_frags *f */)
inet_frags_exit_net(struct netns_frags *nf /*, struct inet_frags *f */)
ip6_expire_frag_queue(struct net *net, struct frag_queue *fq)
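
A hedged sketch of the simplification: struct netns_frags keeps a
back-pointer, so the helpers recover the descriptor themselves (shapes
approximated):

    struct netns_frags {
            /* ... */
            struct inet_frags *f;
    };

    void inet_frag_kill(struct inet_frag_queue *q)
    {
            struct inet_frags *f = q->net->f;
            /* ... */
    }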

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:38 -04:00
Eric Dumazet
787bea7748 inet: frags: change inet_frags_init_net() return value
We will soon initialize one rhashtable per struct netns_frags
in inet_frags_init_net().

This patch changes the return value to eventually propagate an
error.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:38 -04:00
Eric Dumazet
c22af22cbd ipv6: frag: remove unused field
The csum field in struct frag_queue is not used; remove it.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:25:38 -04:00
David S. Miller
5749d6af49 Merge branch 'bnxt_en-next'
Michael Chan says:

====================
bnxt_en: Update for net-next.

Misc. updates including updated firmware interface, some additional
port statistics, a new IRQ assignment scheme for the RDMA driver, support
for VF trust, and other changes and improvements for SRIOV.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:20 -04:00
Michael Chan
ec86f14ea5 bnxt_en: Add ULP calls to stop and restart IRQs.
When the driver needs to re-initialize the IRQ vectors, we make the
new ulp_irq_stop() call to tell the RDMA driver to disable and free
the IRQ vectors.  After the IRQ vectors have been re-initialized, we
make the ulp_irq_restart() call to tell the RDMA driver that
IRQs can be restarted.
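
A hedged sketch of the callback shape; the prototypes are an assumption
based on this description:

    struct bnxt_ulp_ops {
            /* ... */
            void (*ulp_irq_stop)(void *handle);
            void (*ulp_irq_restart)(void *handle,
                                    struct bnxt_msix_entry *ent);
    };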

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:20 -04:00
Michael Chan
fbcfc8e467 bnxt_en: Reserve completion rings and MSIX for bnxt_re RDMA driver.
Add additional logic to reserve completion rings for the bnxt_re driver
when it requests MSIX vectors.  The function bnxt_cp_rings_in_use()
will return the total number of completion rings used by both drivers
that need to be reserved.  If the network interface is up, we will
close and open the NIC to reserve the new set of completion rings and
re-initialize the vectors.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:20 -04:00
Michael Chan
4e41dc5deb bnxt_en: Refactor bnxt_need_reserve_rings().
Refactor bnxt_need_reserve_rings() slightly so that __bnxt_reserve_rings()
can call it and remove some duplicated code.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:20 -04:00
Michael Chan
e5811b8c09 bnxt_en: Add IRQ remapping logic.
Add remapping logic so that bnxt_en can use any arbitrary MSIX vectors.
This will allow the driver to reserve one range of MSIX vectors to be
used by both bnxt_en and bnxt_re.  bnxt_en can now skip over the MSIX
vectors used by bnxt_re.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:20 -04:00
Michael Chan
08654eb213 bnxt_en: Change IRQ assignment for RDMA driver.
In the current code, the range of MSIX vectors allocated for the RDMA
driver is disjoint from the network driver.  This creates a problem
for the new firmware ring reservation scheme.  The new scheme requires
the reserved completion rings/MSIX vectors to be in a contiguous
range.

Change the logic to allocate RDMA MSIX vectors to be contiguous with
the vectors used by bnxt_en on new firmware using the new scheme.
The new function bnxt_get_num_msix() calculates the exact number of
vectors needed by both drivers.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:20 -04:00
Michael Chan
9899bb59ff bnxt_en: Improve ring allocation logic.
Currently, the driver code makes some assumptions about the group index
and the map index of rings.  This makes the code more difficult to
understand and less flexible.

Improve it by adding the grp_idx and map_idx fields explicitly to the
bnxt_ring_struct as a union.  The grp_idx is initialized for each tx ring
and rx agg ring at init time.  We do the same for the map_idx for
each cmpl ring.

The grp_idx ties the tx ring to the ring group.  The map_idx is the
doorbell index of the ring.  With this new infrastructure, we can change
the ring index mapping scheme easily in the future.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:20 -04:00
Michael Chan
845adfe40c bnxt_en: Improve valid bit checking in firmware response message.
When firmware sends a DMA response to the driver, the last byte of the
message will be set to 1 to indicate that the whole response is valid.
The driver waits for the message to be valid before reading the message.

The firmware spec allows these response messages to increase in
length by adding new fields to the end of these messages.  The
older spec's valid location may become a new field in a newer
spec.  To guarantee compatibility, the driver should zero the valid
byte before interpreting the entire message so that any new fields not
implemented by the older spec will be read as zero.

For messages that are forwarded to VFs, we need to set the length
and re-instate the valid bit so the VF will see the valid response.
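
A hedged sketch of the polling logic described above; variable names and
delays are illustrative, not the driver's actual code:

    len   = le16_to_cpu(resp->resp_len);
    valid = (u8 *)resp + len - 1;           /* last byte of the response */

    while (!*valid && time_before(jiffies, deadline))
            usleep_range(25, 40);

    *valid = 0;   /* fields past an older spec's end now read as zero */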

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Michael Chan
596f9d55fe bnxt_en: Improve resource accounting for SRIOV.
When VFs are created, the current code subtracts the maximum VF
resources from the PF's pool.  This under-estimates the resources
remaining in the PF pool.  Instead, we should subtract the minimum
VF resources.  The VF minimum resources are guaranteed to the VFs
and only these should be subtracted from the PF's pool.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Michael Chan
db4723b3cd bnxt_en: Check max_tx_scheduler_inputs value from firmware.
When checking for the maximum pre-set TX channels for ethtool -l, we
need to check the current max_tx_scheduler_inputs parameter from firmware.
This parameter specifies the max input for the internal QoS nodes currently
available to this function.  The function's TX rings will be capped by this
parameter.  By adding this logic, we provide a more accurate pre-set maximum
TX channel count to the user.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Vasundhara Volam
00db3cba35 bnxt_en: Add extended port statistics support
Gather periodic extended port statistics, if the device is a PF and
the link is up.

Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Vasundhara Volam
699efed00d bnxt_en: Include additional hardware port statistics in ethtool -S.
Include additional hardware port statistics in ethtool -S, which
are useful for debugging.

Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Vasundhara Volam
746df13964 bnxt_en: Add support for ndo_set_vf_trust
Trusted VFs are allowed to modify the MAC address, even when the PF
has assigned one.

Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Scott Branden
2373d8d6a7 bnxt_en: fix clear flags in ethtool reset handling
Clear the flags when the reset command is processed successfully for the
specified components.

Fixes: 6502ad5963 ("bnxt_en: Add ETH_RESET_AP support")
Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Michael Chan
abe93ad2e0 bnxt_en: Use a dedicated VNIC mode for RDMA.
If the RDMA driver is registered, use a new VNIC mode that allows
RDMA traffic to be seen on the netdev in promiscuous mode.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Michael Chan
1d3ef13dd4 bnxt_en: Adjust default rings for multi-port NICs.
Change the default ring logic to select a default of up to 8 rings per
port if default rings x NIC ports <= total CPUs.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Michael Chan
d4f52de02f bnxt_en: Update firmware interface to 1.9.1.15.
Minor changes, such as new extended port statistics.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:24:19 -04:00
Wei Yongjun
eeb0a2a526 vlan: vlan_hw_filter_capable() can be static
Fixes the following sparse warning:

net/8021q/vlan_core.c:168:6: warning:
 symbol 'vlan_hw_filter_capable' was not declared. Should it be static?
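
The fix is the usual one for this sparse warning, sketched here with the
argument list elided:

    -bool vlan_hw_filter_capable(/* args elided */)
    +static bool vlan_hw_filter_capable(/* args elided */)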

Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 23:20:48 -04:00
David S. Miller
8bde261e53 mlx5-updates-2018-03-30

Merge tag 'mlx5-updates-2018-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2018-03-30

This series contains updates to mlx5 core and mlx5e netdev drivers.
The main highlight of this series is the RX optimizations for striding RQ path,
introduced by Tariq.

The first four patches are trivial misc cleanups:
 - Spelling mistake fix
 - Dead code removal
 - Warning messages

RX optimizations for striding RQ:

1) RX refactoring, cleanups and micro optimizations
   - MTU calculation simplifications; this obsoletes some WQEs-to-packets
     translation functions and helps delete ~60 LOC.
   - Do not busy-wait a pending UMR completion.
   - Post the new values of the UMR WQE inline, instead of using a data pointer.
   - Use pre-initialized structures to save calculations in the datapath.

2) Use a linear SKB in Striding RQ ("build_skb"); using a linear SKB has
   many advantages (a hedged sketch of this path follows the series
   description below):
    - Saves a memcpy of the headers.
    - No page-boundary checks in the datapath.
    - No filler CQEs.
    - Significantly smaller CQ.
    - SKB data resides contiguously in the linear part, instead of being
      split into a small linear part and a large fragment.
      This saves datapath cycles in the driver and improves utilization
      of SKB fragments in GRO.
    - The fragments of a resulting GRO SKB follow the IP forwarding
      assumption of equal-size fragments.

    Implementation details:
    HW writes the packets to the beginning of a stride,
    i.e. it does not keep headroom. To overcome this we make sure we can
    extend backwards and use the last bytes of stride i-1.
    Extra care is needed for stride 0, as it has no preceding stride.
    We make sure headroom bytes are available by shifting the buffer
    pointer passed to HW by headroom bytes.

    This configuration now becomes the default whenever the HW is capable.
    Of course, this implies turning LRO off.

    Performance testing:
    ConnectX-5, single core, single RX ring, default MTU.

    UDP packet rate, early drop in TC layer:

    --------------------------------------------
    | pkt size | before    | after     | ratio |
    --------------------------------------------
    | 1500byte | 4.65 Mpps | 5.96 Mpps | 1.28x |
    |  500byte | 5.23 Mpps | 5.97 Mpps | 1.14x |
    |   64byte | 5.94 Mpps | 5.96 Mpps | 1.00x |
    --------------------------------------------

    TCP streams: ~20% gain

3) Support XDP over Striding RQ:
    Now that linear SKB is supported over Striding RQ,
    we can support XDP by setting stride size to PAGE_SIZE
    and headroom to XDP_PACKET_HEADROOM.

    Striding RQ is capable of a higher packet-rate than
    conventional RQ.

    Performance testing:
    ConnectX-5, 24 rings, default MTU.
    CQE compression ON (to reduce completions BW in PCI).

    XDP_DROP packet rate:
    --------------------------------------------------
    | pkt size | XDP rate   | 100GbE linerate | pct% |
    --------------------------------------------------
    |   64byte | 126.2 Mpps |      148.0 Mpps |  85% |
    |  128byte |  80.0 Mpps |       84.8 Mpps |  94% |
    |  256byte |  42.7 Mpps |       42.7 Mpps | 100% |
    |  512byte |  23.4 Mpps |       23.4 Mpps | 100% |
    --------------------------------------------------

4) Remove mlx5 page_ref bulking in Striding RQ and use page_ref_inc only when needed.
   Without this bulking, we have:
    - No atomic ops on WQE allocation or free.
    - One atomic op per SKB.
    - In the default MTU configuration (1500, stride size is 2K),
      the non-bulking method executes 2 atomic ops, as before.
    - For larger MTUs with a stride size of 4K, the non-bulking method
      executes only a single op.
    - For XDP (stride size of 4K, no SKBs), the non-bulking method has no
      atomic ops per packet at all.

    Performance testing:
    ConnectX-5, Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz.

    Single core packet rate (64 bytes).

    Early drop in TC: no degradation.

    XDP_DROP:
    before: 14,270,188 pps
    after:  20,503,603 pps, 43% improvement.
====================
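
A hedged sketch of the linear-SKB path in item (2); the variable names are
illustrative, not the driver's actual fields:

    void *va;
    struct sk_buff *skb;

    /* Borrow the tail of stride i-1 as headroom, then build the SKB
     * directly on the stride. */
    va  = page_address(page) + stride_offset - headroom;
    skb = build_skb(va, frag_size);
    if (likely(skb)) {
            skb_reserve(skb, headroom);   /* skip the borrowed headroom */
            skb_put(skb, cqe_bcnt);       /* bytes the HW wrote */
    }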

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:31:43 -04:00
David S. Miller
e2e80c027f RxRPC development

Merge tag 'rxrpc-next-20180330' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

David Howells says:

====================
rxrpc: Fixes and more traces

Here are some patches that add some more tracepoints to AF_RXRPC and fix
some issues therein:

 (1) Fix the use of VERSION packets to keep firewall routes open.

 (2) Fix the incorrect current time usage in a tracepoint.

 (3) Fix Tx ring annotation corruption.

 (4) Fix accidental conversion of call-level abort into connection-level
     abort.

 (5) Fix calculation of resend time.

 (6) Remove a couple of unused variables.

 (7) Fix a bunch of checker warnings and an error.  Note that not all
     warnings can be quashed as checker doesn't seem to correctly handle
     seqlocks.

 (8) Fix a potential race between call destruction and socket/net
     destruction.

 (9) Add a tracepoint to track rxrpc_local refcounting.

(10) Fix an apparent leak of rxrpc_local objects.

(11) Add a tracepoint to track rxrpc_peer refcounting.

(12) Fix a leak of rxrpc_peer objects.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:29:12 -04:00
Haiyang Zhang
3be9b5fdc6 hv_netvsc: Clean up extra parameter from rndis_filter_receive_data()
The variables msg and data have the same value. This patch removes
the extra one.

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:27:45 -04:00
Joe Perches
49b44aa23e ethernet: hisilicon: hns: hns_dsaf_mac: Use generic eth_broadcast_addr
Rather than use an on-stack array to copy a broadcast address, use
the generic eth_broadcast_addr function to save a trivial amount of
object code.
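
For reference, a minimal sketch of the helper's use:

    #include <linux/etherdevice.h>

    u8 mac[ETH_ALEN];

    eth_broadcast_addr(mac);    /* fills mac with ff:ff:ff:ff:ff:ff */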

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:26:43 -04:00
David S. Miller
b3834acdd7 Merge branch 'net_rwsem-fixes'
Kirill Tkhai says:

====================
net_rwsem fixes

There is the wext_netdev_notifier_call()->wireless_nlevent_flush()
netdevice notifier, which takes net_rwsem, so we can't take
net_rwsem in {,un}register_netdevice_notifier().

Since {,un}register_netdevice_notifier() is executed under
pernet_ops_rwsem, net_namespace_list can't change while we are
holding it, so there is no need for net_rwsem in these functions [1/2].

The same applies in [2/2]. We make callers of __rtnl_link_unregister()
take pernet_ops_rwsem, and close the race with setup_net()
and cleanup_net(), so __rtnl_link_unregister() does not need it.
This also fixes the problem that __rtnl_link_unregister() does
not see initializing and exiting nets.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:24:58 -04:00
Kirill Tkhai
554873e517 net: Do not take net_rwsem in __rtnl_link_unregister()
This function calls call_netdevice_notifier(), which also
may take net_rwsem. So we can't use net_rwsem here.

This patch makes callers of this function take pernet_ops_rwsem,
like register_netdevice_notifier() does. This protects
the modifications of net_namespace_list, and allows notifiers
to take it (they won't have to care about context).

Since __rtnl_link_unregister() is used on module load
and unload (which are not frequent operations), this looks
better to me than making all call_netdevice_notifier()
calls always execute in a "protected net_namespace_list" context.

Also, this fixes the problem we dealt with in 328fbe747a
"Close race between {un, }register_netdevice_notifier and ...",
and guarantees that __rtnl_link_unregister() does not skip
an exiting net.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:24:58 -04:00
Kirill Tkhai
fc1dd36992 net: Remove net_rwsem from {, un}register_netdevice_notifier()
These functions take net_rwsem, while wireless_nlevent_flush()
also takes it. But down_read() can't be taken recursively,
because the rw_semaphore design prevents it from being occupied
by only readers forever.

Since we take pernet_ops_rwsem in {,un}register_netdevice_notifier(),
the net list can't change, so these down_read()/up_read() calls can be
removed.

Fixes: f0b07bb151 "net: Introduce net_rwsem to protect net_namespace_list"
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:24:58 -04:00
Wei Yongjun
c679f6a26d net: hns3: remove unnecessary pci_set_drvdata() and devm_kfree()
There is no need for explicit calls of devm_kfree(), as the allocated
memory will be freed when the driver detaches.

The driver core clears the driver data to NULL after device_release.
Thus, there is no need to manually clear the device driver data to NULL.

So remove the unnecessary pci_set_drvdata() and devm_kfree().

Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:22:25 -04:00
David Ahern
ef81710258 netdevsim: Change nsim_devlink_setup to return error to caller
Change nsim_devlink_setup to return any error back to the caller and
update nsim_init to handle it.

Requested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:22:10 -04:00
David S. Miller
6851cf28db Merge branch 'tipc-slim-down-name-table'
Jon Maloy says:

====================
tipc: slim down name table

We clean up and improve the name binding table:

 - Replace the memory-consuming 'sub_sequence/service range' array with
   an RB tree.
 - Introduce support for overlapping service sequences/ranges

 v2: #1: Fixed a missing initialization reported by David Miller
     #4: Obsoleted and replaced a few more macros to get a consistent
         terminology in the API.
     #5: Added new commit to fix a potential string overflow bug (it
         is still only in net-next) reported by Arnd Bergmann
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-31 22:19:59 -04:00