Commit Graph

797358 Commits

Author SHA1 Message Date
Samuel Mendoza-Jonas
cd09ab095c net/ncsi: Don't deselect package in suspend if active
When a package is deselected, all channels of that package cease
communication. If other channels on the suspended channel's package are
still active, deselecting would disable them as well, so only send a
deselect-package command if no other channels are active.

Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-17 21:09:49 -08:00
Samuel Mendoza-Jonas
8e13f70be0 net/ncsi: Probe single packages to avoid conflict
Currently the NCSI driver sends a select-package command to all possible
packages simultaneously to discover which packages are available. However,
at this stage of the probe process the driver does not know whether
hardware arbitration is available: if it isn't, this process could
cause collisions on the RMII bus when packages try to respond.

Update the probe loop to probe each package one by one, and once
complete, check whether HWA is universally supported.

Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-17 21:09:49 -08:00
Samuel Mendoza-Jonas
60ab49bfe4 net/ncsi: Don't enable all channels when HWA available
NCSI hardware arbitration allows multiple packages to be enabled at once
and share the same wiring. If the NCSI driver recognises that HWA is
available, it unconditionally enables all packages and channels, but that
is a configuration decision rather than something required by HWA.
Additionally, the current implementation will not fail over on link events,
which can cause connectivity to be lost unless the interface is manually
bounced.

Retain basic HWA support but remove the separate configuration path to
enable all channels, leaving this to be handled by a later
implementation.

Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-17 21:09:49 -08:00
Arjun Vynipadath
2391b0030e cxgb4: Remove SGE_HOST_PAGE_SIZE dependency on page size
The SGE Host Page Size has nothing to do with the actual
Host Page Size. It's the SGE's BAR2 Doorbell/GTS Page Size
for interpreting the SGE Ingress/Egress Queue per Page values.
Firmware reads all of these things and makes all the
subsequent changes necessary. The Host Driver uses the SGE
Host Page Size in order to properly calculate BAR2 Offsets.

Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Arjun Vynipadath <arjun@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-17 20:36:25 -08:00
Yousuk Seung
e8bd8fca67 tcp: add SRTT to SCM_TIMESTAMPING_OPT_STATS
Add TCP_NLA_SRTT to SCM_TIMESTAMPING_OPT_STATS, which reports the smoothed
round trip time in microseconds (tcp_sock.srtt_us >> 3).

Signed-off-by: Yousuk Seung <ysseung@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-17 20:34:36 -08:00
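
In outline, emitting such a stat into the OPT_STATS netlink blob looks
roughly like this (a sketch; the wrapper function name is illustrative,
the real code presumably adds the nla_put_u32() call in the existing
timestamping stats path):

  #include <net/tcp.h>
  #include <net/netlink.h>

  /* Sketch: append the smoothed RTT in microseconds to an OPT_STATS skb.
   * tcp_sock.srtt_us stores srtt << 3, hence the shift. */
  static int tcp_opt_stats_put_srtt(struct sk_buff *stats, const struct sock *sk)
  {
          const struct tcp_sock *tp = tcp_sk(sk);

          return nla_put_u32(stats, TCP_NLA_SRTT, tp->srtt_us >> 3);
  }
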
Stephen Hemminger
54e8cb7861 uapi/ethtool: fix spelling errors
Trivial spelling errors found by codespell.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-17 20:33:32 -08:00
David S. Miller
6f0271d929 tun: Adjust on-stack tun_page initialization.
Instead of constantly playing with the struct initializer
syntax trying to make gcc and Clang both happy, just clear
it out using memset().

>> drivers/net/tun.c:2503:42: warning: Using plain integer as NULL pointer

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-17 16:53:46 -08:00
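
The pattern, as a generic sketch (the struct is a stand-in for the
driver's on-stack tun_page variable):

  #include <string.h>

  struct tun_page_sketch {        /* stand-in for the driver's struct */
          void *page;
          int count;
  };

  static void example(void)
  {
          /* Instead of `= { 0 }` or `= { NULL }`, which gcc and clang
           * judge differently, zero the struct explicitly: */
          struct tun_page_sketch tpage;

          memset(&tpage, 0, sizeof(tpage));
          (void)tpage;
  }
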
Jason Wang
f9e06c45cb tuntap: free XDP dropped packets in a batch
Thanks to the XDP buffers being batched through msg_control, instead of
calling put_page() for each page, which involves an atomic operation,
let's batch them: record the last page that needs to be freed and
its reference count, then free them in one go.

Testpmd(virtio-user + vhost_net) + XDP_DROP shows 3.8% improvement.

Before: 4.71Mpps
After : 4.89Mpps

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-17 12:00:42 -08:00
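
In outline, the batching amounts to tracking the last page and a count,
and dropping the accumulated references in one call (a sketch; struct and
helper names are illustrative, and it assumes __page_frag_cache_drain()
as the bulk-release primitive):

  #include <linux/mm.h>
  #include <linux/gfp.h>

  struct pending_page {                   /* illustrative batching state */
          struct page *page;
          unsigned int count;
  };

  /* Called once per page to free: coalesce references on the same page. */
  static void batch_put_page(struct pending_page *pend, struct page *page)
  {
          if (pend->page == page) {
                  pend->count++;
                  return;
          }
          if (pend->page)
                  __page_frag_cache_drain(pend->page, pend->count);
          pend->page = page;
          pend->count = 1;
  }

  /* Called at the end of the batch to flush the last recorded page. */
  static void batch_flush(struct pending_page *pend)
  {
          if (pend->page)
                  __page_frag_cache_drain(pend->page, pend->count);
          pend->page = NULL;
  }
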
Jason Wang
e4dab1e6ea vhost_net: mitigate page reference counting during page frag refill
We do a get_page(), which involves an atomic operation. This patch tries
to mitigate the per-packet atomic operation by maintaining a reference
bias which is initially USHRT_MAX. Each time a page is obtained, instead of
calling get_page() we decrease the bias, and when we find it's time to
use a new page we drop the remaining bias in one go through
__page_frag_cache_drain().

Testpmd(virtio_user + vhost_net) + XDP_DROP on TAP shows about 1.6%
improvement.

Before: 4.63Mpps
After:  4.71Mpps

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-17 12:00:42 -08:00
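
The reference-bias idea in outline (a generic sketch of the technique,
not the vhost_net code; names are illustrative, and offset management
within the page is omitted):

  #include <linux/kernel.h>
  #include <linux/mm.h>
  #include <linux/gfp.h>

  struct frag_state {                     /* illustrative page-frag state */
          struct page *page;
          unsigned int bias;              /* references we still "own" */
  };

  /* Hand out one reference without touching the atomic refcount each time. */
  static void *frag_get(struct frag_state *st)
  {
          if (!st->page) {
                  st->page = alloc_page(GFP_KERNEL);
                  if (!st->page)
                          return NULL;
                  /* Pre-charge the refcount once; hand out bias instead. */
                  page_ref_add(st->page, USHRT_MAX - 1);
                  st->bias = USHRT_MAX;
          }
          st->bias--;                     /* consumer will put_page() later */
          return page_address(st->page);
  }

  /* When switching to a new page, drop the unused bias in one call. */
  static void frag_retire(struct frag_state *st)
  {
          if (st->page)
                  __page_frag_cache_drain(st->page, st->bias);
          st->page = NULL;
  }
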
David S. Miller
b8b9618a4f Merge branch 'net-sched-gred-introduce-per-virtual-queue-attributes'
Jakub Kicinski says:

====================
net: sched: gred: introduce per-virtual queue attributes

This series updates the GRED Qdisc.  The Qdisc matches nfp offload very
well, but before we can offload it there are a number of improvements
to make.

First few patches add extack messages to the Qdisc and pass extack
to netlink validation.

Next a new netlink attribute group is added, to allow GRED to be
extended more easily.  Currently GRED passes C structures as attributes,
and even an array of C structs for virtual queue configuration.  User
space has hard-coded the expected length of that array, so adding new
fields is not possible.

The new two-level attribute group looks like this:

  [TCA_GRED_VQ_LIST]
    [TCA_GRED_VQ_ENTRY]
      [TCA_GRED_VQ_DP]
      [TCA_GRED_VQ_FLAGS]
      [TCA_GRED_VQ_STAT_*]
    [TCA_GRED_VQ_ENTRY]
      [TCA_GRED_VQ_DP]
      [TCA_GRED_VQ_FLAGS]
      [TCA_GRED_VQ_STAT_*]
    [TCA_GRED_VQ_ENTRY]
       ...

Statistics are dump only. Patch 4 switches the byte counts to be 64 bit,
and patch 5 introduces the new stats attributes for dump.  Patch 6
switches RED flags to be per-virtual queue, and patch 7 allows them
to be dumped and set at virtual queue granularity.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:08:52 -08:00
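
In outline, dumping that two-level layout with the standard nesting
helpers could look like this (a sketch; the per-VQ values, loop and
function name are illustrative):

  #include <linux/errno.h>
  #include <linux/pkt_sched.h>
  #include <net/netlink.h>

  /* Illustrative: emit TCA_GRED_VQ_LIST > TCA_GRED_VQ_ENTRY > per-VQ attrs. */
  static int gred_dump_vqs_sketch(struct sk_buff *skb, unsigned int nvqs)
  {
          struct nlattr *list, *entry;
          unsigned int i;

          list = nla_nest_start(skb, TCA_GRED_VQ_LIST);
          if (!list)
                  return -EMSGSIZE;

          for (i = 0; i < nvqs; i++) {
                  entry = nla_nest_start(skb, TCA_GRED_VQ_ENTRY);
                  if (!entry)
                          goto cancel;
                  if (nla_put_u32(skb, TCA_GRED_VQ_DP, i) ||
                      nla_put_u32(skb, TCA_GRED_VQ_FLAGS, 0))
                          goto cancel;
                  nla_nest_end(skb, entry);
          }

          nla_nest_end(skb, list);
          return 0;

  cancel:
          nla_nest_cancel(skb, list);
          return -EMSGSIZE;
  }
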
Jakub Kicinski
7211101502 net: sched: gred: allow manipulating per-DP RED flags
Allow users to set and dump RED flags (ECN enabled and harddrop)
on per-virtual queue basis.  Validation of attributes is split
from changes to make sure we won't have to undo previous operations
when we find out configuration is invalid.

The objective is to allow changing per-Qdisc parameters without
overwriting the per-vq configured flags.

Old user space will not pass the TCA_GRED_VQ_FLAGS attribute and
per-Qdisc flags will always get propagated to the virtual queues.

New user space which wants to make use of per-vq flags should set
per-Qdisc flags to 0 and then configure per-vq flags as it
sees fit.  Once per-vq flags are set per-Qdisc flags can't be
changed to non-zero.  Vice versa - if the per-Qdisc flags are
non-zero the TCA_GRED_VQ_FLAGS attribute has to either be omitted
or set to the same value as per-Qdisc flags.

Update per-Qdisc parameters:
per-Qdisc | per-VQ | result
        0 |      0 | all vq flags updated
        0 |  non-0 | error (vq flags in use)
    non-0 |      0 | -- impossible --
    non-0 |  non-0 | all vq flags updated

Update per-VQ state (flags parameter not specified):
   no change to flags

Update per-VQ state (flags parameter set):
per-Qdisc | per-VQ | result
        0 |   any  | per-vq flags updated
    non-0 |      0 | -- impossible --
    non-0 |  non-0 | error (per-Qdisc flags in use)

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:08:51 -08:00
Jakub Kicinski
25fc198907 net: sched: gred: store red flags per virtual queue
Right now ECN marking and HARD drop (the common RED flags) can only
be configured for the entire Qdisc.  In preparation for per-vq flags
store the values in the virtual queue structure.  Setting per-vq
flags will only be allowed when no flags are set for the entire Qdisc.
For the new flags we will also make sure undefined bits are 0.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:08:51 -08:00
Jakub Kicinski
80e22e961d net: sched: gred: provide a better structured dump and expose stats
Currently all GRED's virtual queue data is dumped in a single
array in a single attribute.  This makes it pretty much impossible
to add new fields.  In order to expose more detailed stats add a
new set of attributes.  We can now expose the 64 bit value of bytesin
and all the mark stats which were not part of the original design.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:08:51 -08:00
Jakub Kicinski
9f5cd0c806 net: sched: gred: store bytesin as a 64 bit value
32-bit byte counters are not really going to last long in the modern
world.  Make sch_gred count bytes on a 64-bit counter.  It will still
get truncated during dump, but a follow-up patch will add a set of new
stat dump attributes.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:08:51 -08:00
Jakub Kicinski
4777be08b8 net: sched: gred: use extack to provide more details on configuration errors
Add extack messages to -EINVAL errors, to help users identify
their mistakes.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:08:51 -08:00
Jakub Kicinski
79c59fe01e net: sched: gred: pass extack to nla_parse_nested()
In case netlink wants to provide a parsing error, pass extack
to nla_parse_nested().

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:08:51 -08:00
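
A sketch of the call shape, assuming the nla_parse_nested() signature of
that era which takes a struct netlink_ext_ack (the policy and wrapper
function here are illustrative):

  #include <linux/pkt_sched.h>
  #include <net/netlink.h>

  /* Illustrative policy; TCA_GRED_* come from <linux/pkt_sched.h>. */
  static const struct nla_policy gred_policy_sketch[TCA_GRED_MAX + 1] = {
          [TCA_GRED_LIMIT] = { .type = NLA_U32 },
  };

  static int parse_opts_sketch(struct nlattr *opt, struct nlattr **tb,
                               struct netlink_ext_ack *extack)
  {
          /* Passing extack lets the netlink core attach a precise error
           * message when an attribute fails validation. */
          return nla_parse_nested(tb, TCA_GRED_MAX, opt,
                                  gred_policy_sketch, extack);
  }
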
Jakub Kicinski
255f4803ec net: sched: gred: separate error and non-error path in gred_change()
We will soon want to add more code to the non-error path, so separate
it from the error handling flow.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:08:51 -08:00
Paolo Abeni
9c549a6b05 selftests: add explicit test for multiple concurrent GRO sockets
This covers proper accounting of the encap_needed static keys.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:03:20 -08:00
YueHaibing
b24b767fb1 isdn/hisax: remove set but not used variable 'total'
Fixes gcc '-Wunused-but-set-variable' warning:

drivers/isdn/hisax/hfc_pci.c:277:6: warning:
 variable ‘total’ set but not used [-Wunused-but-set-variable]

It has never been used since the start of git history.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:02:50 -08:00
Paolo Abeni
9c48060141 udp: fix jump label misuse
The commit 60fb9567bf ("udp: implement complete book-keeping for
encap_needed") introduced a severe misuse of jump label APIs, which
syzbot, as reported by Eric, was able to exploit.

When multiple sockets/processes can concurrently request (and then
disable) UDP encap, we need to track the activation counter with the
*_inc()/*_dec() jump label variants, or we can experience bad things
at disable time.

Fixes: 60fb9567bf ("udp: implement complete book-keeping for encap_needed")
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 23:01:56 -08:00
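
The distinction the fix relies on, as a sketch (the key name is
illustrative): enable/disable are idempotent on/off switches, while
inc/dec keep a count so the key only flips off when the last user leaves.

  #include <linux/jump_label.h>

  /* Illustrative key; the real one guards the UDP encap receive path. */
  static DEFINE_STATIC_KEY_FALSE(encap_needed_sketch);

  static void encap_enable_sketch(void)
  {
          /* Counted: N enables require N disables before the branch flips. */
          static_branch_inc(&encap_needed_sketch);
  }

  static void encap_disable_sketch(void)
  {
          static_branch_dec(&encap_needed_sketch);
  }

  static bool encap_active_sketch(void)
  {
          return static_branch_unlikely(&encap_needed_sketch);
  }
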
Jesus Sanchez-Palencia
37342bdaf5 etf: Drop all expired packets
Currently on dequeue() ETF only drops the first expired packet, which
causes a problem if the next packet is already expired as well. When this
happens, the watchdog will be configured with a time in the past, fire
straight away, and the packet will finally be dropped once the dequeue()
function of the qdisc is called again.

We can save quite a few cycles and improve the overall behavior of the
qdisc if we drop all expired packets whenever the next packet is expired.
This should allow ETF to recover faster from bad situations. But
packet drops are still a very serious warning that the requirements
imposed on the system aren't reasonable.

This was inspired by how the implementation of hrtimers use the
rb_tree inside the kernel.

Signed-off-by: Jesus Sanchez-Palencia <jesus.s.palencia@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:39:34 -08:00
Jesus Sanchez-Palencia
cbeeb8efec etf: Split timersortedlist_erase()
This is just a refactor that will simplify the implementation of the
next patch in this series which will drop all expired packets on the
dequeue flow.

Signed-off-by: Jesus Sanchez-Palencia <jesus.s.palencia@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:39:34 -08:00
Jesus Sanchez-Palencia
09fd4860ea etf: Use cached rb_root
ETF's peek() operation is heavily used, so use an rb_root_cached instead
and leverage rb_first_cached(), which runs in O(1) instead of
O(log n).

Even though in 'timesortedlist_clear()' we could use rb_erase(), we
choose rb_erase_cached(), because if in the future we allow
runtime changes to ETF parameters and need to do a '_clear()', this
might cause some hard-to-debug issues.

Signed-off-by: Jesus Sanchez-Palencia <jesus.s.palencia@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:39:34 -08:00
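
The cached-rbtree API in miniature, as a sketch (the node type, key and
function names are illustrative):

  #include <linux/rbtree.h>
  #include <linux/types.h>

  struct entry_sketch {                   /* illustrative node keyed by txtime */
          struct rb_node node;
          u64 txtime;
  };

  static struct rb_root_cached root_sketch = RB_ROOT_CACHED;

  static void insert_sketch(struct entry_sketch *e)
  {
          struct rb_node **p = &root_sketch.rb_root.rb_node, *parent = NULL;
          bool leftmost = true;

          while (*p) {
                  struct entry_sketch *cur = rb_entry(*p, struct entry_sketch, node);

                  parent = *p;
                  if (e->txtime < cur->txtime) {
                          p = &parent->rb_left;
                  } else {
                          p = &parent->rb_right;
                          leftmost = false;
                  }
          }
          rb_link_node(&e->node, parent, p);
          rb_insert_color_cached(&e->node, &root_sketch, leftmost);
  }

  static struct entry_sketch *peek_sketch(void)
  {
          struct rb_node *first = rb_first_cached(&root_sketch);  /* O(1) */

          return first ? rb_entry(first, struct entry_sketch, node) : NULL;
  }
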
Jesus Sanchez-Palencia
3fcbdaee3b etf: Cancel timer if there are no pending skbs
There is no point in firing the qdisc watchdog if there are no future
skbs pending in the queue and the watchdog had been set previously.

Signed-off-by: Jesus Sanchez-Palencia <jesus.s.palencia@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:39:34 -08:00
Yafang Shao
213d7767af tcp: clean up STATE_TRACE
Currently we can use BPF or the tcp tracepoints to conveniently trace TCP
state transitions at run time,
so we don't need to do this at compile time anymore.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:28:00 -08:00
David S. Miller
e119a369b0 Merge branch 'SMSC95xx-driver-updates'
Ben Dooks says:

====================
SMSC95xx driver updates (round 1)

This is a series of a few driver cleanups and some fixups of the code
for the SMSC95XX driver. There have been a few reviews, and the issues
have been fixed so this should be ready for merging.

I will work on the tx-alignment and the other bits of usbnet changes
and produce at least two more patch series for this later.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:16:20 -08:00
Ben Dooks
75938f7710 usbnet: smsc95xx: check for csum being in last four bytes
The manual states that the checksum cannot lie in the last DWORD of the
transmission, so add a basic check for this and fall back to software
checksumming the packet.

This only seems to trigger for ACK packets with no options or data to
return to the other end, and the use of the tx-alignment option makes
it more likely to happen.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:16:19 -08:00
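
A sketch of the kind of check described, under the assumption that the
checksum field position is skb_checksum_start_offset() + csum_offset
(the helper names are illustrative, not the driver's):

  #include <linux/skbuff.h>

  /* Illustrative: refuse hardware checksum offload if the 16-bit checksum
   * field would land in the last DWORD of the frame. */
  static bool tx_csum_usable_sketch(struct sk_buff *skb)
  {
          unsigned int csum_pos = skb_checksum_start_offset(skb) +
                                  skb->csum_offset;

          return csum_pos + sizeof(__sum16) <= skb->len - 4;
  }

  static int tx_fixup_csum_sketch(struct sk_buff *skb)
  {
          if (skb->ip_summed == CHECKSUM_PARTIAL && !tx_csum_usable_sketch(skb))
                  return skb_checksum_help(skb);   /* software fallback */
          return 0;
  }
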
Ben Dooks
6809d2167c usbnet: smsc95xx: fix memcpy for accessing rx-data
Change the RX code to use get_unaligned_le32() instead of the combo
of memcpy and cpu_to_le32s(&var).

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:16:19 -08:00
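
The before/after pattern, as a sketch:

  #include <linux/types.h>
  #include <linux/string.h>
  #include <asm/byteorder.h>
  #include <asm/unaligned.h>

  /* Before: copy into a local, then byte-swap it in place. */
  static u32 read_header_old_sketch(const u8 *data)
  {
          u32 header;

          memcpy(&header, data, sizeof(header));
          cpu_to_le32s(&header);
          return header;
  }

  /* After: one helper handles both alignment and endianness. */
  static u32 read_header_new_sketch(const u8 *data)
  {
          return get_unaligned_le32(data);
  }
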
Ben Dooks
0c8b26556c usbnet: smsc95xx: simplify tx_fixup code
smsc95xx_tx_fixup() makes multiple calls to skb_push() to
put an 8-byte command header onto the packet. It is easier
to do one skb_push() and then copy the data in once the push is
done.

We also make the code smaller by using proper unaligned puts for
the header. This merges in the CPU-to-LE32 conversion as well and
hopefully makes the whole sequence easier to understand.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:16:19 -08:00
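
A sketch of the simplified shape: one skb_push() for the 8-byte header,
then unaligned little-endian stores (the command words and names here
are illustrative):

  #include <linux/skbuff.h>
  #include <asm/unaligned.h>

  #define TX_HEADER_LEN_SKETCH 8   /* illustrative: 2 x 32-bit command words */

  static void tx_push_header_sketch(struct sk_buff *skb, u32 cmd_a, u32 cmd_b)
  {
          u8 *hdr = skb_push(skb, TX_HEADER_LEN_SKETCH);

          /* One push, then unaligned LE32 stores instead of memcpy plus
           * an in-place byte swap. */
          put_unaligned_le32(cmd_a, hdr);
          put_unaligned_le32(cmd_b, hdr + 4);
  }
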
Ben Dooks
810eeb1f41 usbnet: smsc95xx: fix rx packet alignment
The smsc95xx driver already takes into account the NET_IP_ALIGN
parameter when setting up the receive packet data, which means
we do not need to worry about aligning the packets in the usbnet
driver.

Adding the EVENT_NO_IP_ALIGN means that the IPv4 header is now
passed to the ip_rcv() routine with the start on an aligned address.

Tested on Raspberry Pi B3.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:16:19 -08:00
David S. Miller
9cd821b744 Merge branch 'dpaa2-eth-add-bql-support'
Ioana Ciocoi Radulescu says:

====================
dpaa2-eth: add bql support

The first two patches make minor tweaks to the driver to
simplify bql implementation. The third patch adds the actual
bql support.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:12:31 -08:00
Ioana Ciocoi Radulescu
569dac6a5a dpaa2-eth: bql support
Add support for byte queue limits (BQL).

On NAPI poll, we save the total number of Tx-confirmed frames/bytes
and register them with BQL at the end of the poll function.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:12:31 -08:00
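
The BQL pairing in outline (a sketch; the queue lookup and counters are
illustrative):

  #include <linux/netdevice.h>

  /* On the transmit path: tell BQL how many bytes were queued. */
  static void tx_account_sketch(struct net_device *dev, u16 queue_id,
                                unsigned int bytes)
  {
          struct netdev_queue *nq = netdev_get_tx_queue(dev, queue_id);

          netdev_tx_sent_queue(nq, bytes);
  }

  /* At the end of the NAPI poll: report what was Tx-confirmed. */
  static void tx_complete_sketch(struct net_device *dev, u16 queue_id,
                                 unsigned int frames, unsigned int bytes)
  {
          struct netdev_queue *nq = netdev_get_tx_queue(dev, queue_id);

          netdev_tx_completed_queue(nq, frames, bytes);
  }
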
Ioana Ciocoi Radulescu
dbcdf72898 dpaa2-eth: Update callback signature
Change the frame consume callback signature:
* the entire FQ structure is passed to the callback instead
of just the queue index
* the NAPI structure can be easily obtained from the channel
it is associated with, so we don't need to pass it explicitly

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:12:31 -08:00
Ioana Ciocoi Radulescu
b0e4f37b01 dpaa2-eth: Don't use multiple queues per channel
The DPNI object on which we build a network interface has a
certain number of {Rx, Tx, Tx confirmation} frame queues as
resources. The default hardware setup offers one queue of each
type, as well as one DPCON channel, for each core available
in the system.

There are however cases where the number of queues is greater
than the number of cores or channels. Until now, we configured
and used all the frame queues associated with a DPNI, even if it
meant assigning multiple queues of one type to the same channel.

Update the driver to only use a number of queues equal to the
number of channels, ensuring each channel will contain exactly
one Rx and one Tx confirmation queue.

From the user viewpoint, this change is completely transparent.
Performance-wise there is no impact in most scenarios. In case
the number of queues is larger than, and not a multiple of, the
number of channels, Rx hash distribution now offers better load
balancing between cores, which can have a positive impact on
overall system performance.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 20:12:31 -08:00
Jiri Pirko
32764c66fa net: 8021q: move vlan offload registrations into vlan_core
Currently, the vlan packet offloads are registered only upon 8021q module
load. However, even without this module loaded, the offloads could be
utilized, for example by the openvswitch datapath. As reported by Michael,
this brings a 2x to 5x performance improvement, depending on the testcase.

So move the vlan offload registrations into vlan_core and make this
available even without 8021q module loaded.

Reported-by: Michael Shteinbok <michaelsh86@gmail.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Tested-by: Michael Shteinbok <michaelsh86@gmail.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:51:08 -08:00
Colin Ian King
99310e732a net/decnet: add missing indentation
There is a missing indentation before the declaration of port. Add
it.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:42:49 -08:00
Colin Ian King
790cd1a8f0 net: hns3: fix spelling mistake "failded" -> "failed"
Trivial fix, the spelling of "failded" is incorrect in dev_err and
dev_warn messages. Fix this.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:34:50 -08:00
Cong Wang
7f600f14df net: remove unused skb_send_sock()
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:32:33 -08:00
Heiner Kallweit
a21ff3c83b net: phy: check for implementation of both callbacks in phy_drv_supports_irq
Now that the icplus driver has been fixed, all PHY drivers supporting
interrupts have both callbacks (config_intr and ack_interrupt)
implemented, as it should be. Therefore phy_drv_supports_irq()
can now be changed to check for both callbacks being implemented.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:31:06 -08:00
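
With both callbacks mandatory, the check presumably reduces to something
like this sketch:

  #include <linux/phy.h>

  /* Sketch: a driver supports IRQs only if it can both configure the
   * interrupt and acknowledge/clear it. */
  static bool phy_drv_supports_irq_sketch(struct phy_driver *phydrv)
  {
          return phydrv->config_intr && phydrv->ack_interrupt;
  }
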
David S. Miller
6551971ea8 Merge branch 'Remove-VLAN-CFI-overload'
Michał Mirosław says:

====================
Remove VLAN.CFI overload

Fix the BPF code/JITs to allow for separate VLAN_PRESENT flag
storage and finally move the flag to separate storage in the skbuff.

This is the final step to make VLAN.CFI transparent to the core Linux
networking stack.

An #ifdef is introduced temporarily to mark the fragments masking
VLAN_TAG_PRESENT. This is removed altogether in the final patch.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:25:29 -08:00
Michał Mirosław
0c4b2d3705 net: remove VLAN_TAG_PRESENT
Replace VLAN_TAG_PRESENT with a single bit flag and free up the
VLAN.CFI overload. Now VLAN.CFI is visible in the networking stack
and can be passed around intact.

Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:25:29 -08:00
Michał Mirosław
4b50d23179 net/bpf_jit: SPARC: split VLAN_PRESENT bit handling from VLAN_TCI
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:25:28 -08:00
Michał Mirosław
3955dec537 net/bpf_jit: MIPS: split VLAN_PRESENT bit handling from VLAN_TCI
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:25:28 -08:00
Michał Mirosław
4ef3a142d8 net/bpf_jit: PPC: split VLAN_PRESENT bit handling from VLAN_TCI
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:25:28 -08:00
Michał Mirosław
9c21225597 net/bpf: split VLAN_PRESENT bit handling from VLAN_TCI
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:25:28 -08:00
Michał Mirosław
5109f9fd6a net/skbuff: add macros for VLAN_PRESENT bit
Wrap the VLAN_PRESENT bit using macros like PKT_TYPE_* and CLONED_*,
as used by the BPF code.

Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-16 19:25:28 -08:00
David S. Miller
5aa25c05be This feature/cleanup patchset includes the following patches:

Merge tag 'batadv-next-for-davem-20181114' of git://git.open-mesh.org/linux-merge

Simon Wunderlich says:

====================
This feature/cleanup patchset includes the following patches:

 - Bump version strings, by Simon Wunderlich

 - Fixup includes, by Sven Eckelmann (3 patches)

 - Separate BATMAN_ADV_DEBUG from DEBUGFS, by Sven Eckelmann

 - Fixup tracing log documentation, by Sven Eckelmann

 - Use exclusive locks to secure netlink information dump transfers,
   by Sven Eckelmann (8 patches)

 - Move CRC16 dependency, by Sven Eckelmann

 - Enable MCAST by default, by Linus Luessing
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-15 16:12:19 -08:00
Li RongQing
45cf7959c3 net: slightly optimize eth_type_trans
A netperf UDP stream shows that eth_type_trans() consumes a noticeable
amount of CPU, so adjust the MAC address check order: first check whether
the destination is the device address, and only check whether it is a
multicast address if it is not the device address.

After this change:
For unicast where the skb's destination MAC is the device MAC (the most
common case), one comparison is saved.
For unicast where the destination MAC is not the device MAC, nothing changes.
For multicast, one extra comparison is added.

Before:
1.03%  [kernel]          [k] eth_type_trans

After:
0.78%  [kernel]          [k] eth_type_trans

Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-15 15:10:59 -08:00
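
The reordered classification, as a sketch using the plain ether-address
helpers (eth_type_trans() itself uses the _64bits variants; the function
name is illustrative):

  #include <linux/etherdevice.h>
  #include <linux/if_packet.h>
  #include <linux/skbuff.h>

  /* Sketch: classify the packet type, testing the common case of
   * "destination is our own MAC" before the multicast check. */
  static void classify_sketch(struct sk_buff *skb, struct net_device *dev,
                              const struct ethhdr *eth)
  {
          if (likely(ether_addr_equal(eth->h_dest, dev->dev_addr)))
                  return;                 /* unicast to us: nothing to do */

          if (is_multicast_ether_addr(eth->h_dest)) {
                  if (ether_addr_equal(eth->h_dest, dev->broadcast))
                          skb->pkt_type = PACKET_BROADCAST;
                  else
                          skb->pkt_type = PACKET_MULTICAST;
          } else {
                  skb->pkt_type = PACKET_OTHERHOST;
          }
  }
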
Li RongQing
982c17b9e3 net: remove BUG_ON from __pskb_pull_tail
If list is a NULL pointer, the following access of list
will trigger a panic anyway, which has the same effect as the BUG_ON.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-15 15:07:50 -08:00
David S. Miller
7e18750cda Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2018-11-14

This series contains updates to i40e and virtchnl.

Lance Roy updates i40e to use lockdep_assert_held() instead of
spin_is_locked(), since it is better suited to check locking
requirements.

Jan improves the code readability in XDP by adding the use of a local
variable.  He also adds protection, via a locking mechanism, to the methods
that create/modify/destroy VFs, to prevent unstable behaviour and potential
kernel panics.

Krzysztof adds a hardware capability flag to indicate whether firmware
supports stopping the LLDP agent.

Patryk replaces the use of strncpy() with strlcpy() to ensure the buffer
is NULL terminated.

Mitch fixes the issue of trying to start nway on devices that do not
support auto-negotiation, by checking the autoneg state before
attempting to restart nway.

Alice updates virtchnl to keep the checks all together for ease of
readability and consistency.  She also fixed an "off by one" error in the
number of traffic classes being calculated.

Richard fixed VF port VLANs, where the priority bits were incorrectly
set because the incorrect shift and mask bits were being used.

Alan adds a bit to set and check whether a timeout recovery is already
pending, to prevent overlapping transmit timeout recoveries.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-15 15:05:11 -08:00