Commit Graph

Grygorii Strashko
2733d7b89c net: ethernet: ti: cpsw: move mac_hi/lo defines in cpsw.h
Move the mac_hi/lo defines into the common header cpsw.h and re-use
them in netcp_ethss.c.
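
For reference, a sketch of what such shared helpers in cpsw.h could look
like, packing a 6-byte MAC address into the two 32-bit registers (the
exact layout here is illustrative):

	#define mac_hi(mac)	(((mac)[0] << 0) | ((mac)[1] << 8) |	\
				 ((mac)[2] << 16) | ((mac)[3] << 24))
	#define mac_lo(mac)	(((mac)[4] << 0) | ((mac)[5] << 8))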

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 16:36:32 -05:00
Grygorii Strashko
2c8a14d626 net: ethernet: ti: cpsw: move platform data struct to .c file
The CPSW platform data structs cpsw_platform_data and cpsw_slave_data are
used only inside the cpsw.c module, so move these definitions there.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 16:36:32 -05:00
Grygorii Strashko
dda5f5fe74 net: ethernet: ti: cpsw: use proper io apis
Switch to the writel_relaxed()/readl_relaxed() I/O API instead of the
raw variants, as recommended.
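
For illustration, the change has this shape (register offset hypothetical):

	/* before: raw accessors, no endian conversion */
	__raw_writel(val, priv->regs + offset);
	val = __raw_readl(priv->regs + offset);

	/* after: relaxed accessors, endian-correct but still barrier-free */
	writel_relaxed(val, priv->regs + offset);
	val = readl_relaxed(priv->regs + offset);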

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 16:36:32 -05:00
Grygorii Strashko
fc49be85f6 net: ethernet: ti: cpsw: drop unused var poll from cpsw_update_channels_res
Drop unused variable "poll" from cpsw_update_channels_res().

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 16:36:32 -05:00
Palmer Dabbelt
3b62de26cf
RISC-V: Fixes for clean allmodconfig build
Olaf said: Here's a short series of patches that produces a working
allmodconfig. Would be nice to see them go in so we can add build
coverage.

I've dropped patches 8 and 10 from the original set:

* [PATCH 08/10] (RISC-V: Set __ARCH_WANT_RENAMEAT to pick up generic
  version) has a better fix that I've sent out for review; we don't want
  renameat.
* [PATCH 10/10] (input: joystick: riscv has get_cycles) has already been
  taken into Dmitry Torokhov's tree.
2017-12-01 13:31:31 -08:00
Palmer Dabbelt
185e788c84
move libgcc.h to include/linux 2017-12-01 13:16:15 -08:00
Palmer Dabbelt
7382fbdeae
RISC-V: __io_writes should respect the length argument 2017-12-01 13:14:36 -08:00
Palmer Dabbelt
07f8ba7439 RISC-V: User-Visible Changes
This merge contains the user-visible, ABI-breaking changes that we want
to make sure we have in Linux before our first release. Highlights
include:

* VDSO entries for clock_gettime/gettimeofday/getcpu have been added.  These
  are simple syscalls now, but we want to let glibc use them from the
  start so we can make them faster later.
* A VDSO entry for instruction cache flushing has been added so
  userspace can flush the instruction cache.
* The VDSO symbol versions for __vdso_cmpxchg{32,64} have been removed,
  as those VDSO entries don't actually exist.

Conflicts:
        arch/riscv/include/asm/tlbflush.h
2017-12-01 13:12:10 -08:00
Palmer Dabbelt
f8182f613c
RISC-V Atomic Cleanups
This patch set is the result of some feedback that filtered through
after our original patch set was reviewed, some of which was the result
of me missing some email.  It contains:

* A new READ_ONCE in arch_spin_is_locked() (sketched below)
* __test_and_op_bit_ord() is now actually ordered
* Improvements to various comments
* Removal of some dead code
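
A sketch of the arch_spin_is_locked() shape after such a change (assuming
the lock word is a plain integer field; names illustrative):

	static inline int arch_spin_is_locked(arch_spinlock_t *lock)
	{
		/* READ_ONCE keeps the compiler from caching or tearing the load */
		return READ_ONCE(lock->lock) != 0;
	}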
2017-12-01 13:10:42 -08:00
Palmer Dabbelt
da894ff100 RISC-V: __io_writes should respect the length argument
Whoops -- I must have just been being an idiot again.  Thanks to Segher
for finding the bug :).

CC: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2017-12-01 13:09:57 -08:00
Christoph Hellwig
4db2b604c0 move libgcc.h to include/linux
Introducing a new include/lib directory just for this file totally
messes up tab completion for include/linux, which is highly annoying.

Move it to include/linux where we have headers for all kinds of other
lib/ code as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2017-12-01 13:09:40 -08:00
Heiner Kallweit
80274abafc net: phy: remove generic settings for callbacks config_aneg and read_status from drivers
Remove generic settings for callbacks config_aneg and read_status
from drivers.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:42:21 -05:00
Heiner Kallweit
00fde79532 net: phy: core: use genphy version of callbacks read_status and config_aneg per default
read_status and config_aneg are the only mandatory callbacks and most
of the time the generic implementation is used by drivers.
So make the core fall back to the generic version if a driver doesn't
implement the respective callback.

Also, currently the core doesn't seem to verify that drivers implement
the mandatory callbacks. If a driver doesn't do so, we'd just get a NULL
pointer dereference. With this patch this potential issue no longer exists.
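
A minimal sketch of the fallback idea (core helper name hypothetical):

	static int phy_config_aneg(struct phy_device *phydev)
	{
		/* fall back to the generic implementation when the driver
		 * does not provide its own callback
		 */
		if (phydev->drv->config_aneg)
			return phydev->drv->config_aneg(phydev);
		return genphy_config_aneg(phydev);
	}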

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:42:21 -05:00
David S. Miller
efbae71643 Merge branch 'ip6_gre-add-erspan-native-tunnel-for-ipv6'
William Tu says:

====================
ip6_gre: add erspan native tunnel for ipv6

The patch series adds support for ERSPAN tunnel over ipv6.  The first patch
refactors the existing ipv4 gre implementation and the second refactors the
ipv6 gre xmit code.  Finally, the last patch introduces the erspan protocol.

change in v5:
  - add cover-letter description

change in v4:
  - rebase on top of net-next
  - use log_ecn_error in ip6_tnl_rcv

change in v3:
  - add inline for functions in header
  - rebase on top of net-next

change in v2:
  - remove inline
  - fix some indent
  - fix errors reports by clang and scan-build
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:33:27 -05:00
William Tu
5a963eb61b ip6_gre: Add ERSPAN native tunnel support
The patch adds support for ERSPAN tunnel over ipv6.

Signed-off-by: William Tu <u9012063@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:33:27 -05:00
William Tu
898b29798e ip6_gre: Refactor ip6gre xmit codes
This patch refactors ip6gre_xmit_{ipv4,ipv6}.
It is prep work to add the ip6erspan tunnel.

Signed-off-by: William Tu <u9012063@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:33:26 -05:00
William Tu
a3222dc95c ip_gre: Refactor the erspan tunnel code.
Move two erspan functions to the header file erspan.h, so the ipv6
erspan implementation can use them.

Signed-off-by: William Tu <u9012063@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:33:26 -05:00
David S. Miller
50e0f5c038 Merge branch 'ethtool-reset-AP'
Scott Branden says:

====================
net: ethtool: add support for ETH_RESET_AP

Add support to reset application processors inside SmartNICs by
defining a new ETH_RESET_AP bit.

And use the new ETH_RESET_AP bit in the bnxt ethernet driver.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:29:40 -05:00
Scott Branden
6502ad5963 bnxt_en: Add ETH_RESET_AP support
Add ETH_RESET_AP support handling to reset the internal
Application Processor(s) of the SmartNIC card.

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Acked-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:29:40 -05:00
Scott Branden
40e44a1e66 net: ethtool: add support for reset of AP inside NIC interface.
Add ETH_RESET_AP to reset the application processor(s) inside the NIC
interface.

The current ETH_RESET_MGMT supports a management processor inside the NIC.
This is typically used for remote NIC management purposes.

Application processors exist inside some SmartNICs to run various
applications on the NIC processor, ranging from a simple algorithm running
without an OS to something as complex as hosting multiple VMs.
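
A userspace sketch (illustrative; error handling elided) of requesting an
AP reset through the ETHTOOL_RESET ioctl:

	struct ethtool_value reset = {
		.cmd  = ETHTOOL_RESET,
		.data = ETH_RESET_AP,	/* request AP reset */
	};
	struct ifreq ifr = { 0 };

	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ);
	ifr.ifr_data = (void *)&reset;
	ioctl(sock, SIOCETHTOOL, &ifr);	/* flags the driver did not handle
					 * are returned in reset.data */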

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:29:40 -05:00
David S. Miller
68bf33f413 Merge branch 'rds-tcp-netns-delete-related-fixes'
Sowmini Varadhan says:

====================
rds-tcp netns delete related fixes

The patchset contains cleanups and bug fixes. Patch 1 is the removal
of some redundant code/functions. Patches 2 and 3 are fixes for
corner cases identified by syzkaller. I've not been able to
reproduce the actual use-after-free race flagged in the syzkaller
reports, so these fixes are based on code inspection plus
manual testing to make sure the modified code paths are executed
without problems in the commonly encountered timing cases.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:25:15 -05:00
Sowmini Varadhan
f10b4cff98 rds: tcp: atomically purge entries from rds_tcp_conn_list during netns delete
The rds_tcp_kill_sock() function parses the rds_tcp_conn_list
to find the rds_connection entries marked for deletion as part
of the netns deletion under the protection of the rds_tcp_conn_lock.
Since the rds_tcp_conn_list tracks rds_tcp_connections (which
have a 1:1 mapping with rds_conn_path), multiple tc entries in
the rds_tcp_conn_list will map to a single rds_connection, and will
be deleted as part of the rds_conn_destroy() operation that is
done outside the rds_tcp_conn_lock.

The rds_tcp_conn_list traversal done under the protection of
rds_tcp_conn_lock should not leave any doomed tc entries in
the list after the rds_tcp_conn_lock is released, else another
concurrently executing netns delete (for a different netns) thread
may trip on these entries.
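
The purge-under-lock pattern described above, sketched with illustrative
names:

	LIST_HEAD(tmp_list);

	spin_lock_irq(&rds_tcp_conn_lock);
	list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
		if (!net_eq(rds_conn_net(tc->t_cpath->cp_conn), net))
			continue;
		/* unlink doomed entries before the lock is dropped, so a
		 * concurrent netns delete never trips over them
		 */
		list_move_tail(&tc->t_tcp_node, &tmp_list);
	}
	spin_unlock_irq(&rds_tcp_conn_lock);
	/* destroy the connections on tmp_list outside the lock */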

Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:25:15 -05:00
Sowmini Varadhan
681648e67d rds: tcp: correctly sequence cleanup on netns deletion.
Commit 8edc3affc0 ("rds: tcp: Take explicit refcounts on struct net")
introduces a regression in rds-tcp netns cleanup. cleanup_net()
(and thus the rds_tcp_dev_event notification) is only called from put_net()
when all netns refcounts go to 0, but this cannot happen if the
rds_connection itself is holding a c_net ref that it expects to
release in rds_tcp_kill_sock.

Instead, the rds_tcp_kill_sock callback should make sure to
tear down state carefully, ensuring that the socket teardown
is only done after all data-structures and workqs that depend
on it are quiesced.

The original motivation for commit 8edc3affc0 ("rds: tcp: Take explicit
refcounts on struct net") was to resolve a race condition reported by
syzkaller where workqs for tx/rx/connect were triggered after the
namespace was deleted. Those worker threads should have been
cancelled/flushed before socket tear-down and indeed,
rds_conn_path_destroy() does try to sequence this by doing
     /* cancel cp_send_w */
     /* cancel cp_recv_w */
     /* flush cp_down_w */
     /* free data structures */
Here the "flush cp_down_w" will trigger rds_conn_shutdown and thus
invoke rds_tcp_conn_path_shutdown() to close the tcp socket, so that
we ought to have satisfied the requirement that "socket-close is
done after all other dependent state is quiesced". However,
rds_conn_shutdown has a bug in that it *always* triggers the reconnect
workq (and if connection is successful, we always restart tx/rx
workqs so with the right timing, we risk the race conditions reported
by syzkaller).

Netns deletion is like module teardown: there is no need to restart a
reconnect in this case. We can use the c_destroy_in_prog bit
to avoid restarting the reconnect.
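
Conceptually, the guard is (a sketch; names follow the description above):

	/* in rds_conn_shutdown(): don't rearm the reconnect worker when
	 * the connection is being torn down
	 */
	if (!conn->c_destroy_in_prog)
		rds_queue_reconnect(cp);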

Fixes: 8edc3affc0 ("rds: tcp: Take explicit refcounts on struct net")
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:25:15 -05:00
Sowmini Varadhan
2d746c93b6 rds: tcp: remove redundant function rds_tcp_conn_paths_destroy()
A side-effect of commit c14b036681 ("rds: tcp: set linger to 1
when unloading a rds-tcp") is that we always send a RST on the tcp
connection for rds_conn_destroy(), so rds_tcp_conn_paths_destroy()
is no longer needed and is removed in this patch.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:25:15 -05:00
Jon Maloy
4c94cc2d3d tipc: fall back to smaller MTU if allocation of local send skb fails
When sending node-local messages the code uses an 'mtu' of 66060
bytes to avoid unnecessary fragmentation. During situations of low
memory tipc_msg_build() may sometimes fail to allocate such large
buffers, resulting in unnecessary send failures. This can easily be
remedied by falling back to a smaller MTU, and then reassembling the
buffer chain as if the message were arriving from a remote node.

At the same time, we change the initial MTU setting of the broadcast
link to a lower value, so that large messages are always fragmented
into smaller buffers even when we run in single-node mode. Apart from
obtaining the same advantage as the 'fallback' solution above, this
turns out to give a significant performance improvement. This can
probably be explained by the __pskb_copy() operation performed on the
buffer for each recipient during reception. We found the optimal value
for this, considering the most relevant skb pool, to be 3744 bytes.
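
One way the described fallback could be shaped (sketch only; FB_MTU
standing in for the smaller fallback MTU, per the 3744-byte figure above):

	rc = tipc_msg_build(hdr, m, 0, dlen, mtu, &pkts);
	if (unlikely(rc == -ENOMEM)) {
		/* retry with the small MTU, then reassemble the chain as
		 * if the fragments had arrived from a remote node
		 */
		rc = tipc_msg_build(hdr, m, 0, dlen, FB_MTU, &pkts);
	}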

Acked-by: Ying Xue <ying.xue@ericsson.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:21:25 -05:00
David S. Miller
ccab371f74 Merge branch 'sfp-phylink-fixes'
Russell King says:

====================
SFP/phylink fixes

Here are four phylink fixes:
- the "options" is a big-endian value, we must test the bits taking the
  endian-ness into account.
- improve the handling of RX_LOS polarity, taking no RX_LOS polarity
  bits set to mean there is no RX_LOS functionality provided.
- do not report modules that require the address mode switching as
  supporting SFF8472.
- ensure that the mac_link_down() function is called when phylink_stop()
  is called.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:18:42 -05:00
Russell King
2012b7d6b2 phylink: ensure we take the link down when phylink_stop() is called
Ensure that we tell the MAC to take the link down when phylink_stop()
is called, and that this completes before phylink_stop() returns.

Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:18:42 -05:00
Russell King
ec7681bde6 sfp: warn about modules requiring address change sequence
We do not support SFP modules which require the address change sequence
as detailed by SFF 8472 revision 1.22 section 8.9.  Warn when these
modules are inserted, and treat them as SFF8079 modules for ethtool.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:18:42 -05:00
Russell King
710dfbb01a sfp: improve RX_LOS handling
There are two bits in the option word for the RX_LOS signal.  One
reports that the RX_LOS signal is active high, the other reports that
it is active low.  When both or neither are set, the result is not
well defined in the specification.

Rather than assuming that neither bit set means normal RX_LOS, take this
to mean that no RX_LOS signal is available, thereby ignoring the signal.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:18:42 -05:00
Russell King
acf1c02f02 sfp: fix RX_LOS signal handling
The options word is a be16 quantity, so we need to test the flags
after converting the endianness.  Convert the flag bits to be16,
which can be optimised by the compiler at compile time, rather than
converting a variable at runtime.
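
The fix pattern, sketched: convert the constant rather than the variable,
so the conversion happens at compile time:

	/* before: host-order flag tested against a be16 field */
	if (id.ext.options & SFP_OPTIONS_LOS_INVERTED)

	/* after: the comparison is done in wire order */
	if (id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_INVERTED))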

Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:18:41 -05:00
Max Uvarov
a0da456bbf net: phy-micrel: check return code in flp center function
Fix an obvious typo where the first return value is set but never checked.

Signed-off-by: Max Uvarov <muvarov@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:17:06 -05:00
Tommi Rantala
c7799c067c tipc: call tipc_rcv() only if bearer is up in tipc_udp_recv()
Remove the second tipc_rcv() call in tipc_udp_recv(). We have just
checked that the bearer is not up, and calling tipc_rcv() with a bearer
that is not up leads to a TIPC div-by-zero crash in
tipc_node_calculate_timer(). The crash is rare in practice, but can
happen like this:

  We're enabling a bearer, but it's not yet up and fully initialized.
  At the same time we receive a discovery packet, and in tipc_udp_recv()
  we end up calling tipc_rcv() with the not-yet-initialized bearer,
  causing later the div-by-zero crash in tipc_node_calculate_timer().

Jon Maloy explains the impact of removing the second tipc_rcv() call:
  "link setup in the worst case will be delayed until the next arriving
   discovery messages, 1 sec later, and this is an acceptable delay."

As the tipc_rcv() call is removed, just leave the function via the
rcu_out label, so that we kfree_skb() the buffer.
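
In outline, the receive path then has this shape (a sketch following the
description above; names illustrative):

	rcu_read_lock();
	b = rcu_dereference(ub->bearer);
	if (b && test_bit(0, &b->up)) {
		/* only hand the skb to TIPC when the bearer is fully up */
		tipc_rcv(sock_net(sk), skb, b);
		rcu_read_unlock();
		return 0;
	}
	/* ... discovery handling ... */
rcu_out:
	rcu_read_unlock();
	goto out;		/* out: kfree_skb(skb); */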

[   12.590450] Own node address <1.1.1>, network identity 1
[   12.668088] divide error: 0000 [#1] SMP
[   12.676952] CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.14.2-dirty #1
[   12.679225] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-2.fc27 04/01/2014
[   12.682095] task: ffff8c2a761edb80 task.stack: ffffa41cc0cac000
[   12.684087] RIP: 0010:tipc_node_calculate_timer.isra.12+0x45/0x60 [tipc]
[   12.686486] RSP: 0018:ffff8c2a7fc838a0 EFLAGS: 00010246
[   12.688451] RAX: 0000000000000000 RBX: ffff8c2a5b382600 RCX: 0000000000000000
[   12.691197] RDX: 0000000000000000 RSI: ffff8c2a5b382600 RDI: ffff8c2a5b382600
[   12.693945] RBP: ffff8c2a7fc838b0 R08: 0000000000000001 R09: 0000000000000001
[   12.696632] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8c2a5d8949d8
[   12.699491] R13: ffffffff95ede400 R14: 0000000000000000 R15: ffff8c2a5d894800
[   12.702338] FS:  0000000000000000(0000) GS:ffff8c2a7fc80000(0000) knlGS:0000000000000000
[   12.705099] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   12.706776] CR2: 0000000001bb9440 CR3: 00000000bd009001 CR4: 00000000003606e0
[   12.708847] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   12.711016] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   12.712627] Call Trace:
[   12.713390]  <IRQ>
[   12.714011]  tipc_node_check_dest+0x2e8/0x350 [tipc]
[   12.715286]  tipc_disc_rcv+0x14d/0x1d0 [tipc]
[   12.716370]  tipc_rcv+0x8b0/0xd40 [tipc]
[   12.717396]  ? minmax_running_min+0x2f/0x60
[   12.718248]  ? dst_alloc+0x4c/0xa0
[   12.718964]  ? tcp_ack+0xaf1/0x10b0
[   12.719658]  ? tipc_udp_is_known_peer+0xa0/0xa0 [tipc]
[   12.720634]  tipc_udp_recv+0x71/0x1d0 [tipc]
[   12.721459]  ? dst_alloc+0x4c/0xa0
[   12.722130]  udp_queue_rcv_skb+0x264/0x490
[   12.722924]  __udp4_lib_rcv+0x21e/0x990
[   12.723670]  ? ip_route_input_rcu+0x2dd/0xbf0
[   12.724442]  ? tcp_v4_rcv+0x958/0xa40
[   12.725039]  udp_rcv+0x1a/0x20
[   12.725587]  ip_local_deliver_finish+0x97/0x1d0
[   12.726323]  ip_local_deliver+0xaf/0xc0
[   12.726959]  ? ip_route_input_noref+0x19/0x20
[   12.727689]  ip_rcv_finish+0xdd/0x3b0
[   12.728307]  ip_rcv+0x2ac/0x360
[   12.728839]  __netif_receive_skb_core+0x6fb/0xa90
[   12.729580]  ? udp4_gro_receive+0x1a7/0x2c0
[   12.730274]  __netif_receive_skb+0x1d/0x60
[   12.730953]  ? __netif_receive_skb+0x1d/0x60
[   12.731637]  netif_receive_skb_internal+0x37/0xd0
[   12.732371]  napi_gro_receive+0xc7/0xf0
[   12.732920]  receive_buf+0x3c3/0xd40
[   12.733441]  virtnet_poll+0xb1/0x250
[   12.733944]  net_rx_action+0x23e/0x370
[   12.734476]  __do_softirq+0xc5/0x2f8
[   12.734922]  irq_exit+0xfa/0x100
[   12.735315]  do_IRQ+0x4f/0xd0
[   12.735680]  common_interrupt+0xa2/0xa2
[   12.736126]  </IRQ>
[   12.736416] RIP: 0010:native_safe_halt+0x6/0x10
[   12.736925] RSP: 0018:ffffa41cc0cafe90 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff4d
[   12.737756] RAX: 0000000000000000 RBX: ffff8c2a761edb80 RCX: 0000000000000000
[   12.738504] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[   12.739258] RBP: ffffa41cc0cafe90 R08: 0000014b5b9795e5 R09: ffffa41cc12c7e88
[   12.740118] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000002
[   12.740964] R13: ffff8c2a761edb80 R14: 0000000000000000 R15: 0000000000000000
[   12.741831]  default_idle+0x2a/0x100
[   12.742323]  arch_cpu_idle+0xf/0x20
[   12.742796]  default_idle_call+0x28/0x40
[   12.743312]  do_idle+0x179/0x1f0
[   12.743761]  cpu_startup_entry+0x1d/0x20
[   12.744291]  start_secondary+0x112/0x120
[   12.744816]  secondary_startup_64+0xa5/0xa5
[   12.745367] Code: b9 f4 01 00 00 48 89 c2 48 c1 ea 02 48 3d d3 07 00
00 48 0f 47 d1 49 8b 0c 24 48 39 d1 76 07 49 89 14 24 48 89 d1 31 d2 48
89 df <48> f7 f1 89 c6 e8 81 6e ff ff 5b 41 5c 5d c3 66 90 66 2e 0f 1f
[   12.747527] RIP: tipc_node_calculate_timer.isra.12+0x45/0x60 [tipc] RSP: ffff8c2a7fc838a0
[   12.748555] ---[ end trace 1399ab83390650fd ]---
[   12.749296] Kernel panic - not syncing: Fatal exception in interrupt
[   12.750123] Kernel Offset: 0x13200000 from 0xffffffff82000000
(relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[   12.751215] Rebooting in 60 seconds..

Fixes: c9b64d492b ("tipc: add replicast peer discovery")
Signed-off-by: Tommi Rantala <tommi.t.rantala@nokia.com>
Cc: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:14:22 -05:00
Eric Dumazet
cfac7f836a tcp/dccp: block bh before arming time_wait timer
Maciej Żenczykowski reported some panics in tcp_twsk_destructor()
that might be caused by the following bug.

The timewait timer is pinned to the CPU, because we want to transition
the timewait refcount from 0 to 4 in one go, once everything has been
initialized.

At the time commit ed2e923945 ("tcp/dccp: fix timewait races in timer
handling") was merged, TCP was always running from the BH handler.

After commit 5413d1babe ("net: do not block BH while processing
socket backlog") we definitely can run tcp_time_wait() from process
context.

We need to block BH in the critical section so that the pinned timer
still serves its purpose.

This bug is more likely to happen under stress and when very small RTOs
are used in datacenter flows.
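
In outline, the fix wraps the timer arming in a BH-disabled section
(sketch):

	local_bh_disable();
	/* the timewait timer is pinned to this CPU; arming it with BH
	 * blocked keeps the 0 -> 4 refcount transition from racing
	 * with the timer firing
	 */
	inet_twsk_schedule(tw, timeo);
	inet_twsk_hashdance(tw, sk, &tcp_hashinfo);
	local_bh_enable();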

Fixes: 5413d1babe ("net: do not block BH while processing socket backlog")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Maciej Żenczykowski <maze@google.com>
Acked-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:07:43 -05:00
David S. Miller
b484d8a53e Merge branch 'sctp-prsctp-chunk-fixes'
Xin Long says:

====================
sctp: a couple of fixes for chunks abandoned in prsctp

Currently, when abandoning chunks in prsctp, the code doesn't account for
the frags of a single msg; the peer can then never receive all the frags
of that msg and get them reassembled, so these pieces of the msg will stay
in the reasm queue forever and block the following chunks' receiving.

This patchset fixes this in patches 2 and 3, and also fixes another
prsctp issue in patch 1.
====================

Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:06:31 -05:00
Xin Long
779edd7348 sctp: do not abandon the other frags in unsent outq if one msg has outstanding frags
Currently, for abandoned chunks in the unsent outq, the code just frees the
chunks. Because no tsn is assigned to them yet, there's no need to send a
fwd tsn to the peer, unlike for abandoned chunks in the sent outq.

The problem is that when parts of the msg have been sent and the other
frags are still in the unsent outq, if those are abandoned/dropped, the
peer can never get this msg reassembled.

So these frags in unsent outq can't be dropped if this msg already has
outstanding frags.

This patch does the check in sctp_chunk_abandoned and
sctp_prsctp_prune_unsent.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:06:24 -05:00
Xin Long
e5f612969c sctp: abandon the whole msg if one part of a fragmented message is abandoned
As rfc3758#section-3.1 demands:

   A3) When a TSN is "abandoned", if it is part of a fragmented message,
       all other TSN's within that fragmented message MUST be abandoned
       at the same time.

Besides, if this is not handled, the remaining frags would never get
reassembled on the peer side.

This patch supports it by adding an abandoned flag in sctp_datamsg: when
one chunk is being abandoned, set chunk->msg->abandoned as well. The next
time we check for abandoned, check chunk->msg->abandoned first.
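
The check order described, as a minimal sketch:

	/* in sctp_chunk_abandoned()-style checks: the whole-message flag
	 * wins -- if any frag was abandoned, all frags of the msg are
	 */
	if (chunk->msg->abandoned)
		return 1;

	/* ... and when a policy decides to abandon this chunk: */
	chunk->msg->abandoned = 1;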

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:06:24 -05:00
Xin Long
d30fc5126e sctp: only update outstanding_bytes for transmitted queue when doing prsctp_prune
Currently, outstanding_bytes is only increased when appending chunks into
one packet and sending it the first time, and decreased when a chunk is
about to move into the retransmit queue. This means the outstanding_bytes
value is already decreased for all chunks in the retransmit queue.

However sctp_prsctp_prune_sent is a common function that checks the chunks
in both the transmitted and retransmit queues, and it decreases
outstanding_bytes when moving a chunk into the abandoned queue from either
of them.

This can cause an outstanding_bytes underflow, as it also decreases its
value for the chunks in the retransmit queue.

This patch fixes it by only updating outstanding_bytes for the transmitted
queue when pruning queues for the prsctp prio policy; the same fix is also
needed in sctp_check_transmitted.

Fixes: 8dbdf1f5b0 ("sctp: implement prsctp PRIO policy")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01 15:06:23 -05:00
Daniel Borkmann
4485166519 Merge branch 'bpf-nfp-jmp-memcpy-improvements'
Jiong Wang says:

====================
Currently, the compiler will lower a memcpy function call in an XDP/eBPF C
program into a sequence of eBPF load/store pairs for some scenarios.

The compiler considers this "inline" optimization beneficial, as it avoids
a function call and also increases code locality.

However, the Netronome NPU is not a traditional load/store architecture,
so executing a sequence of individual load/store actions is not efficient.

This patch set tries to identify the load/store sequences composed of
load/store pairs that come from memcpy lowering, then accelerates them
through the NPU's Command Push Pull (CPP) instruction.

This patch set registers a new optimization pass before doing the actual
JIT work; it traverses the eBPF IR, and once a candidate sequence is found
it records the memory copy source, destination and length information in
the first load instruction starting the sequence and marks all remaining
instructions in the sequence as skippable. Later, when JITing the first
load instruction, optimal instructions are generated using the recorded
information.

For the safety of this transformation:

  - a jump into the middle of the sequence cancels the optimization.

  - overlapped memory access cancels the optimization.

  - the load destination register must still contain the same value as
    before the transformation.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:21 +01:00
Jiong Wang
6bc7103c89 nfp: bpf: detect load/store sequences lowered from memory copy
This patch adds the optimization frontend by adding a new eBPF IR scan
pass "nfp_bpf_opt_ldst_gather".

The pass traverses the IR to recognize the load/store pair sequences
that come from the lowering of memory copy builtins.

The gathered memory copy information will be kept in the meta info
structure of the first load instruction in the sequence and will be
consumed by the optimization backend added in the previous patches.

NOTE: a sequence with cross memory access doesn't qualify for this
optimization, i.e. if one load in the sequence loads from a place that
has been written by a previous store. This is because when we turn the
sequence into a single CPP operation, we read all contents at once
into NFP transfer registers, then write them out as a whole. This is not
identical to what the original load/store sequence does.

Detecting cross memory access for two arbitrary pointers is difficult;
fortunately, under XDP/eBPF's restricted runtime environment the copy
normally happens among the map, packet data and stack, which do not
overlap with each other.

And for cases supported by NFP, cross memory access will only happen on
PTR_TO_PACKET. Fortunately for this, there is ID information with which we
can do an accurate memory alias check.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:20 +01:00
Jiong Wang
8c90053858 nfp: bpf: implement memory bulk copy for length bigger than 32-bytes
When the gathered copy length is bigger than 32 bytes and within 128 bytes
(the maximum length a single CPP Pull/Push request can finish), the
read/write strategy is changed to:

  * Read.
      - use direct reference mode when length is within 32-bytes.
      - use indirect mode when length is bigger than 32-bytes.

  * Write.
      - length <= 8-bytes
        use write8 (direct_ref).
      - length <= 32-byte and 4-bytes aligned
        use write32 (direct_ref).
      - length <= 32-bytes but not 4-bytes aligned
        use write8 (indirect_ref).
      - length > 32-bytes and 4-bytes aligned
        use write32 (indirect_ref).
      - length > 32-bytes and not 4-bytes aligned and <= 40-bytes
        use write32 (direct_ref) to finish the first 32-bytes.
        use write8 (direct_ref) to finish all remaining hanging part.
      - length > 32-bytes and not 4-bytes aligned
        use write32 (indirect_ref) to finish those 4-byte aligned parts.
        use write8 (direct_ref) to finish all remaining hanging part.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:20 +01:00
Jiong Wang
9879a3814b nfp: bpf: implement memory bulk copy for length within 32-bytes
For NFP, we want to re-group a sequence of load/store pairs lowered from
memcpy/memmove into a single memory bulk operation which can then be
accelerated using the NFP CPP bus.

This patch extends the existing load/store auxiliary information by adding
two new fields:

	struct bpf_insn *paired_st;
	s16 ldst_gather_len;

Both fields are supposed to be carried by the load instruction at the
head of the sequence. "paired_st" is the corresponding store instruction at
the head and "ldst_gather_len" is the gathered length.

If "ldst_gather_len" is negative, then the sequence is doing memory
load/store in descending order, otherwise it is in ascending order. We need
this information to detect overlapped memory access.

This patch then optimizes memory bulk copy when the copy length is within
32 bytes.

The read/write strategy used is:

  * Read.
    Use read32 (direct_ref), always.

  * Write.
    - length <= 8-bytes
      write8 (direct_ref).
    - length <= 32-bytes and is 4-byte aligned
      write32 (direct_ref).
    - length <= 32-bytes but is not 4-byte aligned
      write8 (indirect_ref).

NOTE: the optimization should not change program semantics. The destination
register of the last load instruction should contain the same value before
and after this optimization.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:20 +01:00
Jiong Wang
5e4d6d2093 nfp: bpf: factor out is_mbpf_load & is_mbpf_store
It is common that we need to check whether one BPF insn is loading/storing
data from/to memory.

Therefore, it makes sense to factor out the related code into common
helper functions.
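
A sketch of such helpers, following the standard eBPF opcode encoding
(BPF_SIZE_MASK masks out the operand-size bits; names illustrative):

	static inline bool is_mbpf_load(const struct nfp_insn_meta *meta)
	{
		return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_LDX | BPF_MEM);
	}

	static inline bool is_mbpf_store(const struct nfp_insn_meta *meta)
	{
		return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_MEM);
	}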

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:20 +01:00
Jakub Kicinski
5468a8b929 nfp: bpf: encode indirect commands
Add support for emitting commands with field overwrites.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:20 +01:00
Jiong Wang
3239e7bb28 nfp: bpf: correct the encoding for No-Dest immed
When immed is used with No-Dest, the emitter should use reg.dst instead of
reg.areg for the destination; using the latter will actually encode
register zero.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:20 +01:00
Jiong Wang
08859f159e nfp: bpf: relax source operands check
The NFP normally requires the source operands to be different addressing
modes, but we should rule out the very special NN_REG_NONE type.

There are instructions that ignore both A/B operands, for example:

  local_csr_rd

For these instructions, we might pass the same operand type, NN_REG_NONE,
for both A/B operands.

NOTE: in the current NFP ISA, it is only possible for instructions with
unrestricted operands to take none operands, but in case there is a new
and similar instruction in restricted form, it would follow similar rules,
so swreg_to_restricted is updated as well.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:20 +01:00
Jiong Wang
29fe46efba nfp: bpf: don't do ld/shifts combination if shifts are jump destination
If any of the shift insns in the ld/shift sequence is a jump destination,
don't do the combination.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:20 +01:00
Jiong Wang
1266f5d655 nfp: bpf: don't do ld/mask combination if mask is jump destination
If the mask insn in the ld/mask pair is a jump destination, then don't do
the combination.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:20 +01:00
Jiong Wang
a09d5c52c4 nfp: bpf: flag jump destination to guide insn combine optimizations
The NFP eBPF offload JIT engine does some instruction-combine
optimizations which, however, are not safe if the combined sequences
cross basic block borders.

Currently, there are post checks while fixing jump destinations. If the
jump destination is found to be an eBPF insn that has been combined into
another one, then the JIT engine will raise an error and abort.

This is not optimal. The JIT engine ought to disable the optimization on
such cross-bb-border sequences instead of aborting.

As there is no control flow information in the eBPF infrastructure with
which we could do basic-block-based optimizations, this patch extends the
existing jump destination record pass to also flag the jump destination;
then, in the instruction combine passes, we can skip the optimizations if
insns in the sequence are jump targets.

Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:19 +01:00
Jiong Wang
5b674140ad nfp: bpf: record jump destination to simplify jump fixup
eBPF insns are internally organized as a dual list inside the NFP offload
JIT. Random access to an insn needs to be done by either forward or
backward traversal along the list.

One place we need to do such traversal is in nfp_fixup_branches, where one
traversal is needed for each jump insn to find the destination. Such
traversals could be avoided if jump destinations were collected through a
single traversal in a pre-scan pass, and such information could also be
useful in other places where jump destination info is needed.

This patch adds such jump destination collection in nfp_prog_prepare.

Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:19 +01:00
Jiong Wang
854dc87d1a nfp: bpf: support backward jump
This patch adds support for backward jump on NFP.

  - restrictions on backward jump in various functions have been removed.
  - nfp_fixup_branches now supports backward jump.

There is one thing to note: currently an input eBPF JMP insn may generate
several NFP insns, for example,

  NFP imm move insn A \
  NFP compare insn  B  --> 3 NFP insn jited from eBPF JMP insn M
  NFP branch insn   C /
  ---
  NFP insn X           --> 1 NFP insn jited from eBPF insn N
  ---
  ...

therefore, we do a sanity check to make sure the last jited insn from
an eBPF JMP is an NFP branch instruction.

Once backward jump is allowed, it is possible for an eBPF JMP insn to be
at the end of the program. This, however, causes trouble for the sanity
check, because the check requires the end index of the NFP insns jited
from one eBPF insn, while before this patch only the start index is
recorded, so we can only get the end index by:

  start_index_of_the_next_eBPF_insn - 1

or for the above example:

  start_index_of_eBPF_insn_N (which is the index of NFP insn X) - 1

nfp_fixup_branches was using nfp_for_each_insn_walk2 to expose the *next*
insn to each iteration during the traversal, so the last index could be
calculated from it. Now, it needs some extra code to handle the last
insn. Meanwhile, the use of walk2 is actually unnecessary: we could simply
use a generic single-instruction walk to do this, and the next insn can
easily be calculated using list_next_entry.

So, this patch migrates the jump fixup traversal method to
*list_for_each_entry*, which simplifies the code logic a little bit.

The other thing to note is that a new state variable "last_bpf_off" is
introduced to track the index of the last jited NFP insn. This is
necessary because NFP generates special-purpose epilogue sequences, so the
index of the last jited NFP insn is *not* always nfp_prog->prog_len - 1.

Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01 20:59:19 +01:00