Commit Graph

796910 Commits

Author SHA1 Message Date
Yafang Shao
5e13a0d3f5 tcp: minor optimization in tcp ack fast path processing
A bitwise test is a little faster, so replace after() with a check of
the flag FLAG_SND_UNA_ADVANCED, which is already set at this point.

In addition, there's another similar improvement in tcp_cwnd_reduction().

Cc: Joe Perches <joe@perches.com>
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:24:18 -08:00
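
A minimal user-space sketch of the substitution described above (not
the kernel's tcp_input.c; the flag value and the after() helper are
reproduced here only for illustration):

	/* Reuse a flag computed earlier instead of repeating the
	 * wrap-safe sequence comparison performed by after(). */
	#include <stdint.h>
	#include <stdio.h>

	#define FLAG_SND_UNA_ADVANCED 0x400	/* illustrative value */

	static int after(uint32_t seq1, uint32_t seq2)
	{
		return (int32_t)(seq2 - seq1) < 0;	/* wrap-safe "seq1 is later" */
	}

	int main(void)
	{
		uint32_t ack = 1005, prior_snd_una = 1000;
		int flag = 0;

		/* earlier in the ACK path the flag is computed once ... */
		if (after(ack, prior_snd_una))
			flag |= FLAG_SND_UNA_ADVANCED;

		/* ... so later checks can test the flag instead of comparing again */
		if (flag & FLAG_SND_UNA_ADVANCED)
			printf("snd_una advanced, fast path\n");
		return 0;
	}
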
David S. Miller
0cf3a68a53 Merge branch 'mv88e6xxx-Support-more-SERDES-interfacxes'
Andrew Lunn says:

====================
net: dsa: mv88e6xxx: Support more SERDES interfaces

Currently the SERDES interfaces for ports 9 and 10 on the mv88e6390x
are supported, allowing up to 10G. However, when unused, these SERDES
interfaces can be used by some of the lower ports for 1000Base-X.

The tricky bit here is ordering. The SERDES interfaces have to become
free from ports 9 or 10 before they can be used with lower ports.
Normally, this would happen only when these ports are configured up,
which is too late. So at probe time, defaulting ports 9 and 10 to
1000BaseX frees the SERDES interfaces for use with lower ports. If they
are actually needed, they will be taken back when ports 9 and 10 come up.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:17:46 -08:00
Andrew Lunn
2defda1f4b net: dsa: mv88e6xxx: Add support for SERDES on ports 2-8 for 6390X
The 6390X family has 8 SERDES interfaces. When ports 9 and 10 are not
using all their SERDES interfaces, the unused ones can be assigned to
ports 2-8. Add support for interrupts from SERDES interfaces connected
to these lower ports.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:17:46 -08:00
Andrew Lunn
787799a9d5 net: dsa: mv88e6xxx: Default ports 9/10 6390X CMODE to 1000BaseX
The 6390X family has 8 SERDES interfaces. This allows ports 9 and 10
to support up to 10Gbps using 4 SERDES interfaces. However, when lower
speeds are used, which need fewer SERDES interfaces, the unused SERDES
interfaces can be used by ports 2-8.

The hardware defaults to ports 9 and 10 having all 4 SERDES interfaces
assigned to them. This only gets changed when the interface is
configured after what the SFP supports has been determined, or the 10G
PHY completes auto-neg.

For hardware designs which limit ports 9 and 10 to one or two SERDES
interfaces, and place SFPs on the lower interfaces, this is too
late. Those ports with SFP should not wait until ports 9/10 are up in
order to get access to the SERDES interface. So change the default
configuration when the driver is initialised. Configure ports 9 and 10
to 1000BaseX, so they use a single SERDES interface, freeing up the
others. They can steal them back if they need them.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:17:46 -08:00
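
A rough user-space model of the lane bookkeeping described above
(purely illustrative; port numbers and lane counts follow the text,
none of this is mv88e6xxx driver code):

	#include <stdio.h>

	#define NUM_LANES 8

	static int lane_owner[NUM_LANES];	/* port owning each lane, 0 = free */

	static int claim_lanes(int port, int want)
	{
		int got = 0;

		for (int i = 0; i < NUM_LANES && got < want; i++) {
			if (!lane_owner[i]) {
				lane_owner[i] = port;
				got++;
			}
		}
		return got;
	}

	static void release_lanes(int port)
	{
		for (int i = 0; i < NUM_LANES; i++)
			if (lane_owner[i] == port)
				lane_owner[i] = 0;
	}

	int main(void)
	{
		claim_lanes(9, 4);		/* hardware default: 4 lanes each */
		claim_lanes(10, 4);

		/* probe-time default to 1000BaseX: one lane each, rest freed */
		release_lanes(9);
		claim_lanes(9, 1);
		release_lanes(10);
		claim_lanes(10, 1);

		printf("lanes a lower port can now claim: %d\n",
		       claim_lanes(2, NUM_LANES));
		return 0;
	}
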
Andrew Lunn
fdc71eea8c net: dsa: mv88e6xxx: Differentiate between 6390 and 6390X cmodes
The X family variants support additional port modes for 10G operation,
which the non-X variants don't have. Add a port_set_cmode() for non-X
variants to enforce this.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:17:46 -08:00
Andrew Lunn
b3dce4da5b net: dsa: mv88e6xxx: Group cmode ops together
Move .port_set_cmode next to .port_get_cmode.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:17:45 -08:00
David S. Miller
8d2681f5ce Merge branch 'net-phy-convert-advertise-and-supported-to-linkmode'
Andrew Lunn says:

====================
net: phy: convert advertise and supported to linkmode

This is the last part in converting phylib to make use of a Linux
bitmap, not a u32, to represent link modes. This will allow support
for PHYs > 1Gbps, which need to use link modes represented by bit
indices > 31.

A number of MAC and PHY drivers need changes to support this. However,
the previous two patchsets reduced that number somewhat: the helpers
which were introduced could be modified instead of the individual
drivers.

The follow on patches then make use of the extra bits, adding support
for more link modes.

Given how invasive this change is, I expect the build is broken for
some architectures I did not test. I will fix up the breakage as fast
as I can.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:10:02 -08:00
Andrew Lunn
cb6402fe26 net: phy: Add support for resolving 5G and 2.5G autoneg
Now that 2.5G and 5G can be represented in phydev->advertising and
phydev->lp_advertising, add these two link modes as possible
resolutions of auto negotiation.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:10:02 -08:00
Andrew Lunn
3c6b59d6f0 net: phy: Add more link modes to the settings table
Now that PHYs and MACs can support masks wider than 32 bits, add link
modes whose bit index is > 31 to the PHY settings table.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:10:01 -08:00
Andrew Lunn
fe1919147c net: phy: Fixup kerneldoc markup.
Add missing markup for function parameters

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:10:01 -08:00
Andrew Lunn
c0ec3c2736 net: phy: Convert u32 phydev->lp_advertising to linkmode
Convert phy drivers to report the link partner advertised modes using
a linkmode bitmap. This allows them to report the higher speeds which
don't fit in a u32.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:10:01 -08:00
Andrew Lunn
3c1bcc8614 net: ethernet: Convert phydev advertize and supported from u32 to link mode
There are a few MAC/PHY combinations which now support > 1Gbps. These
may need to make use of link modes with bit indices > 31. Thus their
supported PHY features or advertised features cannot be represented
using the current u32 bitmap. Convert to using a linkmode bitmap,
which can hold all currently defined link modes and is future proof
as more modes are added.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:10:01 -08:00
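
A self-contained sketch of why a u32 no longer suffices; the bit
numbers mirror the ethtool link-mode enumeration (e.g. 2500baseT_Full)
but are shown here only as an illustration:

	#include <stdint.h>
	#include <stdio.h>

	#define BITS_PER_LONG	(8 * sizeof(unsigned long))
	#define BITMAP_WORDS(n)	(((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

	enum {
		LINK_MODE_1000baseT_Full = 5,	/* fits in a u32 */
		LINK_MODE_2500baseT_Full = 47,	/* does not fit in a u32 */
		LINK_MODE_NBITS = 64,
	};

	static void set_mode(unsigned long *map, unsigned int nr)
	{
		map[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
	}

	int main(void)
	{
		unsigned long advertising[BITMAP_WORDS(LINK_MODE_NBITS)] = { 0 };

		set_mode(advertising, LINK_MODE_1000baseT_Full);
		set_mode(advertising, LINK_MODE_2500baseT_Full);	/* needs bit 47 */

		printf("bitmap words needed: %zu\n",
		       (size_t)BITMAP_WORDS(LINK_MODE_NBITS));
		return 0;
	}
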
Heiner Kallweit
899a3cbbf7 net: phy: remove states PHY_STARTING and PHY_PENDING
Neither state is used. Most likely they result from an idea that
never materialized. So remove them.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 10:05:26 -08:00
yupeng
b08794a922 documentation of some IP/ICMP snmp counters
The snmp_counter.rst document explains the meanings of snmp counters.
It also provides a set of experiments (only one for this initial patch)
and ties the experiments' results to the snmp counters' meanings. This
is an initial patch; it only explains a part of the IP/ICMP counters
and provides a simple ping test.

Signed-off-by: yupeng <yupeng0921@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:59:02 -08:00
LUU Duc Canh
31c4f4cc32 tipc: improve broadcast retransmission algorithm
Currently, the broadcast retransmission algorithm is using the
'prev_retr' field in struct tipc_link to time stamp the latest broadcast
retransmission occasion. This helps to restrict retransmission of
individual broadcast packets to at most once per 10 milliseconds, even
when all other criteria for retransmission are met.

We now move this time stamp to the control block of each individual
packet, and remove other limiting criteria. This simplifies the
retransmission algorithm, and eliminates any risk of logical errors
in selecting which packets can be retransmitted.

Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: LUU Duc Canh <canh.d.luu@dektech.com.au>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:57:46 -08:00
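
A hedged sketch of the per-packet gating described above: each packet
carries its own retransmission stamp, so one packet's recent
retransmission no longer blocks the others (times are plain integers
in milliseconds, purely for illustration):

	#include <stdbool.h>
	#include <stdio.h>

	#define BC_RETR_LIMIT_MS 10	/* minimum gap between retransmissions */

	struct bcast_pkt {
		int seqno;
		long last_retr_ms;	/* stamp kept in the packet's control block */
	};

	static bool bcast_retransmit(struct bcast_pkt *pkt, long now_ms)
	{
		if (now_ms - pkt->last_retr_ms < BC_RETR_LIMIT_MS)
			return false;	/* retransmitted too recently */
		pkt->last_retr_ms = now_ms;
		printf("retransmit seqno %d at %ld ms\n", pkt->seqno, now_ms);
		return true;
	}

	int main(void)
	{
		struct bcast_pkt a = { 1, -100 }, b = { 2, -100 };

		bcast_retransmit(&a, 0);	/* sent */
		bcast_retransmit(&a, 5);	/* suppressed: only 5 ms since last */
		bcast_retransmit(&b, 5);	/* sent: packet b has its own stamp */
		bcast_retransmit(&a, 12);	/* sent again: 12 ms elapsed */
		return 0;
	}
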
David S. Miller
bb5e6a8290 Merge branch 'net-sched-indirect-tc-block-cb-registration'
Jakub Kicinski says:

====================
net: sched: indirect tc block cb registration

John says:

This patchset introduces an alternative to egdev offload by allowing a
driver to register for block updates when an external device (e.g. tunnel
netdev) is bound to a TC block. Drivers can track new netdevs or register
to existing ones to receive information on such events. Based on this,
they may register for block offload rules using already existing
functions.

The patchset also implements this new indirect block registration in the
NFP driver to allow the offloading of tunnel rules. The use of egdev
offload (which is currently only used for tunnel offload) is subsequently
removed.

RFC v2 -> PATCH
 - removed embedded tracking function from indir block register (now up to
   driver to clean up after itself)
 - refactored NFP code due to recent submissions
 - removed priv list clean function in NFP (list should be cleared by
   indirect block unregisters)

RFC v1->v2:
 - free allocated owner struct in block_owner_clean function
 - add geneve type helper function
 - move test stub in NFP (v1 patch 2) to full tunnel offload
   implementation via indirect blocks (v2 patches 3-8)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:54:53 -08:00
John Hurley
d4b69bad61 nfp: flower: remove unnecessary code in flow lookup
Recent changes to NFP mean that stats updates from fw to driver no longer
require a flow lookup and (because egdev offload has been removed) the
ingress netdev for a lookup is now always known.

Remove obsolete code in a flow lookup that matches on host context and
that allows for a netdev to be NULL.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:54:53 -08:00
John Hurley
4f63fde3fc nfp: flower: remove TC egdev offloads
Previously, only tunnel decap rules required egdev registration for
offload in NFP. These are now supported via indirect TC block callbacks.

Remove the egdev code from NFP.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:54:53 -08:00
John Hurley
3166dd07a9 nfp: flower: offload tunnel decap rules via indirect TC blocks
Previously, TC block tunnel decap rules were only offloaded when a
callback was triggered through registration of the rules egress device.
This meant that the driver had no access to the ingress netdev and so
could not verify it was the same tunnel type that the rule implied.

Register tunnel devices for indirect TC block offloads in NFP, giving
access to new rules based on the ingress device rather than egress. Use
this to verify the netdev type of VXLAN and Geneve based rules and offload
the rules to HW if applicable.

Tunnel registration is done via a netdev notifier. On notifier
registration, this is triggered for already existing netdevs. This means
that NFP can register for offloads from devices that exist before it is
loaded (filter rules will be replayed from the TC core). Similarly, on
notifier unregister, a call is triggered for each currently active netdev.
This allows the driver to unregister any indirect block callbacks that may
still be active.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:54:53 -08:00
John Hurley
65b7970edf nfp: flower: increase scope of netdev checking functions
Both the actions and tunnel_conf files contain local functions that check
the type of an input netdev. In preparation for re-use with tunnel offload
via indirect blocks, move these to static inline functions in a header
file.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:54:53 -08:00
John Hurley
7885b4fc8d nfp: flower: allow non repr netdev offload
Previously the offload functions in NFP assumed that the ingress (or
egress) netdev passed to them was an nfp repr.

Modify the driver to permit the passing of non repr netdevs as the ingress
device for an offload rule candidate. This may include devices such as
tunnels. The driver should then base its offload decision on a combination
of ingress device and egress port for a rule.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:54:53 -08:00
John Hurley
7f76fa3675 net: sched: register callbacks for indirect tc block binds
Currently drivers can register to receive TC block bind/unbind callbacks
by implementing the setup_tc ndo in any of their given netdevs. However,
drivers may also be interested in binds to higher level devices (e.g.
tunnel drivers) to potentially offload filters applied to them.

Introduce indirect block devs which allows drivers to register callbacks
for block binds on other devices. The callback is triggered when the
device is bound to a block, allowing the driver to register for rules
applied to that block using already available functions.

Freeing an indirect block callback will trigger an unbind event (if
necessary) to direct the driver to remove any offloaded rules and
unregister any block rule callbacks. It is the responsibility of the
implementing driver to clean up any registered indirect block callbacks
before exiting, if the block is still active at that time.

Allow registering an indirect block dev callback for a device that is
already bound to a block. In this case (if it is an ingress block),
register and also trigger the callback meaning that any already installed
rules can be replayed to the calling driver.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:54:52 -08:00
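
A compact user-space model of this registration flow; every name below
is hypothetical and the in-kernel API differs, this only illustrates
the shape of the mechanism:

	#include <stdio.h>
	#include <stdlib.h>

	struct net_device { const char *name; };
	struct tc_block { struct net_device *dev; };

	typedef void (*indir_block_cb)(struct tc_block *block, void *cb_priv);

	struct indir_block_entry {
		struct net_device *dev;		/* device we watch for block binds */
		indir_block_cb cb;		/* driver callback to run on bind */
		void *cb_priv;
		struct indir_block_entry *next;
	};

	static struct indir_block_entry *indir_list;

	/* driver side: register interest in binds on a foreign netdev */
	static void indir_block_register(struct net_device *dev,
					 indir_block_cb cb, void *cb_priv)
	{
		struct indir_block_entry *e = malloc(sizeof(*e));

		e->dev = dev;
		e->cb = cb;
		e->cb_priv = cb_priv;
		e->next = indir_list;
		indir_list = e;
	}

	/* core side: when a block is bound to a device, notify the watchers */
	static void tc_block_bind(struct tc_block *block)
	{
		for (struct indir_block_entry *e = indir_list; e; e = e->next)
			if (e->dev == block->dev)
				e->cb(block, e->cb_priv);
	}

	static void offload_cb(struct tc_block *block, void *cb_priv)
	{
		printf("driver %s: block bound on %s, register for its rules\n",
		       (const char *)cb_priv, block->dev->name);
	}

	int main(void)
	{
		struct net_device vxlan0 = { "vxlan0" };
		struct tc_block block = { &vxlan0 };

		indir_block_register(&vxlan0, offload_cb, "nfp");
		tc_block_bind(&block);		/* triggers the driver callback */
		return 0;
	}
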
David S. Miller
d1ce01144e Merge branch 'PHYID-matching-macros'
Heiner Kallweit says:

====================
net: phy: add macros for PHYID matching in PHY driver config

Add macros for PHYID matching to be used in PHY driver configs.
By using these macros some boilerplate code can be avoided.

Use them initially in the Realtek PHY drivers.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:44:14 -08:00
Heiner Kallweit
ca49493633 net: phy: realtek: use new PHYID matching macros
Use new macros for PHYID matching to avoid boilerplate code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:44:14 -08:00
Heiner Kallweit
aa2af2eb44 net: phy: add macros for PHYID matching
Add macros for PHYID matching to be used in PHY driver configs.
By using these macros some boilerplate code can be avoided.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:44:14 -08:00
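
A sketch of the kind of macro the patch adds; the masks and the
example IDs below are illustrative, not the phylib definitions:

	#include <stdint.h>
	#include <stdio.h>

	struct phy_driver_cfg {
		uint32_t phy_id;
		uint32_t phy_id_mask;
		const char *name;
	};

	/* one macro expands to both members, so tables stop repeating the pair */
	#define PHY_ID_MATCH_EXACT(id)	.phy_id = (id), .phy_id_mask = 0xffffffffu
	#define PHY_ID_MATCH_MODEL(id)	.phy_id = (id), .phy_id_mask = 0xfffffff0u

	static const struct phy_driver_cfg drivers[] = {
		{ PHY_ID_MATCH_EXACT(0x001cc816), .name = "example PHY, exact match" },
		{ PHY_ID_MATCH_MODEL(0x001cc910), .name = "example PHY, model match" },
	};

	int main(void)
	{
		for (unsigned int i = 0; i < sizeof(drivers) / sizeof(drivers[0]); i++)
			printf("%s: id=0x%08x mask=0x%08x\n", drivers[i].name,
			       (unsigned int)drivers[i].phy_id,
			       (unsigned int)drivers[i].phy_id_mask);
		return 0;
	}
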
David S. Miller
fa28a2b244 Merge branch 'phylib-simplifications'
Heiner Kallweit says:

====================
net: phy: further phylib simplifications after recent changes to the state machine

After the recent changes to the state machine phylib can be further
simplified (w/o having to make any assumptions).
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:41:32 -08:00
Heiner Kallweit
34d884e3da net: phy: improve and inline phy_change
Now that phy_mac_interrupt() no longer calls phy_change(), it is
called from phy_interrupt() only. Therefore phy_interrupt_is_valid()
always returns true and the check can be removed.
In case of PHY_HALTED, phy_interrupt() bails out immediately,
therefore the second check for PHY_HALTED, including the call to
phy_disable_interrupts(), can be removed.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:41:32 -08:00
Heiner Kallweit
d73a2156bd net: phy: simplify phy_mac_interrupt and related functions
When using phy_mac_interrupt() the irq number is set to
PHY_IGNORE_INTERRUPT, therefore phy_interrupt_is_valid() returns false.
As a result phy_change() effectively just calls phy_trigger_machine()
when called from phy_mac_interrupt() via phy_change_work(). So we can
call phy_trigger_machine() from phy_mac_interrupt() directly and
remove some now unneeded code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:41:32 -08:00
Heiner Kallweit
8deeb6309c net: phy: don't set state PHY_CHANGELINK in phy_change
State PHY_CHANGELINK isn't needed here, we can call the state machine
directly. We just have to remove the check for phy_polling_mode() to
make this work also in interrupt mode. Removing this check doesn't
cause any overhead because when not polling the state machine is
called only if required by some event.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:41:32 -08:00
David S. Miller
d79e26a7ef Merge branch 'remove-PHY_HAS_INTERRUPT'
Heiner Kallweit says:

====================
net: phy: replace PHY_HAS_INTERRUPT with a check for config_intr and ack_interrupt

Flag PHY_HAS_INTERRUPT is used only here, for this small check. I think
using interrupts isn't possible if a driver defines neither a
config_intr nor an ack_interrupt callback. So we can replace checking
flag PHY_HAS_INTERRUPT with checking for these callbacks.
This allows removing this flag from all driver configs.

v2:
- add helper for check in patch 1
- remove PHY_HAS_INTERRUPT from all drivers, not only Realtek
- remove flag PHY_HAS_INTERRUPT completely

v3:
- rebase patch 2
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:37:04 -08:00
Heiner Kallweit
a4307c0ec6 net: phy: remove flag PHY_HAS_INTERRUPT from driver configs
Now that flag PHY_HAS_INTERRUPT has been replaced with a check for
callbacks config_intr and ack_interrupt, we can remove setting this
flag from all driver configs.
Last but not least remove flag PHY_HAS_INTERRUPT completely.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:36:56 -08:00
Heiner Kallweit
0d2e778e38 net: phy: replace PHY_HAS_INTERRUPT with a check for config_intr and ack_interrupt
Flag PHY_HAS_INTERRUPT is used only here, for this small check. I think
using interrupts isn't possible if a driver defines neither a
config_intr nor an ack_interrupt callback. So we can replace checking
flag PHY_HAS_INTERRUPT with checking for these callbacks.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-11 09:36:46 -08:00
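
A minimal sketch of the check described, assuming a struct reduced to
just the two callbacks involved (not the actual phylib code):

	#include <stdbool.h>
	#include <stdio.h>

	struct phy_driver {
		int (*config_intr)(void *phydev);
		int (*ack_interrupt)(void *phydev);
	};

	/* interrupts are only usable if both callbacks are implemented */
	static bool phy_drv_supports_irq(const struct phy_driver *drv)
	{
		return drv->config_intr && drv->ack_interrupt;
	}

	static int demo_config_intr(void *phydev)   { (void)phydev; return 0; }
	static int demo_ack_interrupt(void *phydev) { (void)phydev; return 0; }

	int main(void)
	{
		struct phy_driver with_irq = {
			.config_intr = demo_config_intr,
			.ack_interrupt = demo_ack_interrupt,
		};
		struct phy_driver without_irq = { 0 };

		printf("with_irq: %d, without_irq: %d\n",
		       phy_drv_supports_irq(&with_irq),
		       phy_drv_supports_irq(&without_irq));
		return 0;
	}
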
David S. Miller
e15e067d06 sctp: Fix SKB list traversal in sctp_intl_store_ordered().
Same change as made to sctp_intl_store_reasm().

To be fully correct, an iterator has an undefined value when something
like skb_queue_walk() naturally terminates.

This will actually matter when SKB queues are converted over to
list_head.

Formalize what this code ends up doing with the current
implementation.

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-10 19:32:23 -08:00
David S. Miller
348bbc25c4 sctp: Fix SKB list traversal in sctp_intl_store_reasm().
To be fully correct, an iterator has an undefined value when something
like skb_queue_walk() naturally terminates.

This will actually matter when SKB queues are converted over to
list_head.

Formalize what this code ends up doing with the current
implementation.

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-10 19:28:27 -08:00
David S. Miller
9e733177c7 iucv: Remove SKB list assumptions.
Eliminate the assumption that SKBs and SKB list heads can
be cast to each other in SKB list handling code.

This change also appears to fix a bug since the list->next pointer is
sampled outside of holding the SKB queue lock.

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-10 16:55:11 -08:00
David S. Miller
4a5a553dde brcmfmac: Use standard SKB list accessors in brcmf_sdiod_sglist_rw.
Instead of direct SKB list pointer accesses.

The loops in this function had to be rewritten to accommodate this
more easily.

The first loop iterates now over the target list in the outer loop,
and triggers an mmc data operation when the per-operation limits are
hit.

Then after the loops, if we have any residue, we trigger the last
and final operation.

For the page aligned workaround, where we have to copy the read data
back into the original list of SKBs, we use a two-tiered loop.  The
outer loop stays the same and iterates over pktlist, and then we have
an inner loop which uses skb_peek_next().  The break logic has been
simplified because we know that the aggregate length of the SKBs in
the source and destination lists is the same.

This change also ends up fixing a bug, having to do with the
maintenance of the seg_sz variable and how it drove the outermost
loop.  It begins as:

	seg_sz = target_list->qlen;

ie. the number of packets in the target_list queue.  The loop
structure was then:

	while (seg_sz) {
		...
		while (not at end of target_list) {
			...
			sg_cnt++
			...
		}
		...
		seg_sz -= sg_cnt;

The assumption built into that last statement is that sg_cnt counts
how many packets from target_list have been fully processed by the
inner loop.  But this is not true.

If we hit one of the limits, such as the max segment size or the max
request size, we will break out, copy a partial packet, then continue
back up to the top of the outermost loop.

With the new loops we don't have this problem as we don't guard the
loop exit with a packet count, but instead use the progression of the
pkt_next SKB through the list to the end.  The general structure is:

	sg_cnt = 0;
	skb_queue_walk(target_list, pkt_next) {
		pkt_offset = 0;
		...
		sg_cnt++;
		...
		while (pkt_offset < pkt_next->len) {
			pkt_offset += sg_data_size;
			if (queued up max per request)
				mmc_submit_one();
		}
	}
	if (sg_cnt)
		mmc_submit_one();

The variables that maintain where we are in the MMC command state such
as req_sz, sg_cnt, and sgl are reset when we emit one of these full
sized requests.

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-10 16:31:15 -08:00
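
A standalone sketch of that structure, with made-up packet lengths and
a pretend per-request byte limit (this is not the driver code):

	#include <stdio.h>

	#define MAX_REQ_SZ 1024		/* pretend per-request limit */

	static void mmc_submit_one(int req_sz, int sg_cnt)
	{
		printf("submit: %d bytes in %d segments\n", req_sz, sg_cnt);
	}

	int main(void)
	{
		int pkt_lens[] = { 700, 500, 300, 900 };
		int req_sz = 0, sg_cnt = 0;

		for (unsigned int i = 0; i < sizeof(pkt_lens) / sizeof(pkt_lens[0]); i++) {
			int pkt_offset = 0;

			sg_cnt++;
			while (pkt_offset < pkt_lens[i]) {
				int chunk = pkt_lens[i] - pkt_offset;

				if (req_sz + chunk > MAX_REQ_SZ)
					chunk = MAX_REQ_SZ - req_sz;
				pkt_offset += chunk;
				req_sz += chunk;
				if (req_sz == MAX_REQ_SZ) {
					/* limit hit: emit a request and reset state */
					mmc_submit_one(req_sz, sg_cnt);
					req_sz = 0;
					/* current packet may still have residue */
					sg_cnt = pkt_offset < pkt_lens[i] ? 1 : 0;
				}
			}
		}
		if (req_sz)
			mmc_submit_one(req_sz, sg_cnt);	/* final residue */
		return 0;
	}
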
Michał Mirosław
6083e28aa0 OVS: remove VLAN_TAG_PRESENT - fixup
It turns out I missed one VLAN_TAG_PRESENT in OVS code while rebasing.
This fixes it.

Fixes: 9df46aefaf ("OVS: remove use of VLAN_TAG_PRESENT")
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-10 13:42:16 -08:00
David S. Miller
12ceaf8864 infiniband: nes: Fix more direct skb list accesses.
The following:

	skb = skb->next;
	...
	if (skb == (struct sk_buff *)queue)

is transformed into:

	skb = skb_peek_next(skb, queue);
	...
	if (!skb)

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 21:19:44 -08:00
Kyle Roeschley
457937bd2e net: phy: leds: Don't make our own link speed names
The phy core provides a handy phy_speed_to_str() helper, so use that
instead of doing our own formatting of the different known link speeds.
To do this, increase PHY_LED_TRIGGER_SPEED_SUFFIX_SIZE to 11 so we can fit
'Unsupported' if necessary.

Signed-off-by: Kyle Roeschley <kyle.roeschley@ni.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 20:13:51 -08:00
Heiner Kallweit
695bce8fd8 net: phy: improve struct phy_device member interrupts handling
As a heritage from the very early days of phylib, member interrupts is
defined as a u32 even though it's just a flag indicating whether
interrupts are enabled. So we can change it to a bitfield member. In
addition, change the code dealing with this member in a way that makes
it clear we're dealing with a bool value.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 20:11:56 -08:00
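
A toy illustration of the change; the struct below is a stand-in, not
phylib's phy_device:

	#include <stdbool.h>
	#include <stdio.h>

	struct phy_device_demo {
		unsigned int interrupts:1;	/* was: u32 interrupts */
	};

	static void enable_irq(struct phy_device_demo *p)  { p->interrupts = true; }
	static void disable_irq(struct phy_device_demo *p) { p->interrupts = false; }

	int main(void)
	{
		struct phy_device_demo phy = { 0 };

		enable_irq(&phy);
		printf("interrupts enabled: %d\n", phy.interrupts);
		disable_irq(&phy);
		printf("interrupts enabled: %d\n", phy.interrupts);
		return 0;
	}
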
David S. Miller
a4bec00b28 Merge branch 'dpaa2-eth-defer-probe-on-object-allocate'
Ioana Ciornei says:

====================
dpaa2-eth: defer probe on object allocate

Allocatable objects on the fsl-mc bus may be probed by the fsl_mc_allocator
after the first attempts of other drivers to use them. Defer the probe when
this situation happens.

Changes in v2:
  - proper handling of IS_ERR_OR_NULL
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 20:08:59 -08:00
Ioana Ciornei
5500598abb dpaa2-ptp: defer probe when portal allocation failed
The fsl_mc_portal_allocate function can fail when the requested MC
portals are not yet probed by the fsl_mc_allocator. In this situation,
the driver should defer the probe.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 20:08:58 -08:00
Ioana Ciornei
d7f5a9d89a dpaa2-eth: defer probe on object allocate
The fsl_mc_object_allocate function can fail because not all allocatable
objects are probed by the fsl_mc_allocator at the call time. Defer the
dpaa2-eth probe when this happens.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 20:08:58 -08:00
Paolo Abeni
029a374348 udp6: cleanup stats accounting in recvmsg()
In the udp6 code path, we needed multiple tests to select the correct
mib to be updated. Since we touch at least one counter at each
iteration, it's convenient to use the recently introduced __UDPX_MIB()
helper once and remove some code duplication.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 20:07:05 -08:00
Jakub Kicinski
560f1ba4d8 nfp: use the new __netdev_tx_sent_queue() BQL optimisation
__netdev_tx_sent_queue() was added in commit e59020abf0f
("net: bql: add __netdev_tx_sent_queue()") and allows for
better GSO performance.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 19:49:00 -08:00
David S. Miller
3f2bba7d68 Merge branch 'ptp-more-accurate-PHC-system-clock-synchronization'
Miroslav Lichvar says:

====================
More accurate PHC<->system clock synchronization

RFC->v1:
- added new patches
- separated PHC timestamp from ptp_system_timestamp
- fixed memory leak in PTP_SYS_OFFSET_EXTENDED
- changed PTP_SYS_OFFSET_EXTENDED to work with array of arrays
- fixed PTP_SYS_OFFSET_EXTENDED to break correctly from loop
- fixed timecounter updates in drivers
- split gettimex in igb driver
- fixed ptp_read_* functions to be available without
  CONFIG_PTP_1588_CLOCK

This series enables a more accurate synchronization between PTP hardware
clocks and the system clock.

The first two patches are minor cleanup/bug fixes.

The third patch adds an extended version of the PTP_SYS_OFFSET ioctl,
which returns three timestamps for each measurement. The idea is to
shorten the interval between the system timestamps to contain just the
reading of the lowest register of the PHC in order to reduce the error
in the measured offset and get a smaller upper bound on the maximum
error.

The fourth patch deprecates the original gettime function.

The remaining patches update the gettime function in order to support
the new ioctl in the e1000e, igb, ixgbe, and tg3 drivers.

Tests with a few different NICs in different machines show that:
- with an I219 (e1000e) the measured delay was reduced from 2500 to 1300
  ns and the error in the measured offset, when compared to the cross
  timestamping supported by the driver, was reduced by a factor of 5
- with an I210 (igb) the delay was reduced from 5100 to 1700 ns
- with an I350 (igb) the delay was reduced from 2300 to 750 ns
- with an X550 (ixgbe) the delay was reduced from 1950 to 650 ns
- with a BCM5720 (tg3) the delay was reduced from 2400 to 1200 ns
====================

Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 19:44:21 -08:00
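
A tiny numeric sketch of why the extra timestamps help, using made-up
nanosecond values (this is not the ioctl or driver code):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* one extended-style sample: system clock, PHC, system clock */
		int64_t sys_before = 1000000000;	/* system time, ns */
		int64_t phc        = 1000000400;	/* PHC read in between */
		int64_t sys_after  = 1000000650;	/* system time again, ns */

		int64_t delay  = sys_after - sys_before;
		int64_t offset = phc - (sys_before + delay / 2);

		/* the shorter the bracketing delay, the tighter the error bound */
		printf("offset estimate: %lld ns (+/- %lld ns)\n",
		       (long long)offset, (long long)(delay / 2));
		return 0;
	}
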
Miroslav Lichvar
6fe42e228d tg3: extend PTP gettime function to read system clock
This adds support for the PTP_SYS_OFFSET_EXTENDED ioctl.

Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 19:43:51 -08:00
Miroslav Lichvar
018ed23ddc ixgbe: extend PTP gettime function to read system clock
This adds support for the PTP_SYS_OFFSET_EXTENDED ioctl.

Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Jacob Keller <jacob.e.keller@intel.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 19:43:51 -08:00
Miroslav Lichvar
cff8ba28db igb: extend PTP gettime function to read system clock
This adds support for the PTP_SYS_OFFSET_EXTENDED ioctl.

Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Jacob Keller <jacob.e.keller@intel.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 19:43:51 -08:00
Miroslav Lichvar
98942d7053 e1000e: extend PTP gettime function to read system clock
This adds support for the PTP_SYS_OFFSET_EXTENDED ioctl.

Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Jacob Keller <jacob.e.keller@intel.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09 19:43:51 -08:00