Commit Graph

244 Commits

Hariprasad Shenai
031cf4769b cxgb4/iw_cxgb4: display TPTE on errors
With ingress WRITE or READ RESPONSE errors, HW provides the offending
stag from the packet.  This patch adds logic to log the parsed TPTE
in this case. cxgb4 now exports a function to read a TPTE entry
from adapter memory.
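
For illustration only, a minimal sketch of how an error path in the ULD
might use such an exported read routine; read_tpte() is a hypothetical
stand-in for the helper cxgb4 exports, not its exact name or signature:

#include <linux/kernel.h>
#include <linux/netdevice.h>

/* Hypothetical sketch: dump the TPTE for the offending stag on an
 * ingress WRITE/READ RESPONSE error.  read_tpte() stands in for the
 * entry-read helper exported by cxgb4. */
static void log_err_tpte(struct net_device *ndev, u32 stag)
{
	__be32 tpte[8];		/* one TPTE entry pulled from adapter memory */
	int i;

	if (read_tpte(ndev, stag, tpte))	/* hypothetical helper */
		return;
	for (i = 0; i < ARRAY_SIZE(tpte); i++)
		pr_err("stag 0x%x TPTE[%d]: 0x%08x\n", stag, i,
		       be32_to_cpu(tpte[i]));
}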

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-15 16:25:16 -07:00
Hariprasad Shenai
4c2c576322 cxgb4/iw_cxgb4: use firmware ord/ird resource limits
Advertise a larger max read queue depth for qps, and gather the resource limits
from fw, using them to avoid exhausting all the resources.

Design:

cxgb4:

Obtain the max_ordird_qp and max_ird_adapter device params from FW
at init time and pass them up to the ULDs when they attach.  If these
parameters are not available, due to older firmware, then hard-code
the values based on the known values for older firmware.
iw_cxgb4:

Fix c4iw_query_device() to report the correct values based on
adapter parameters.  ibv_query_device() will always return:

max_qp_rd_atom = max_qp_init_rd_atom = min(module_max, max_ordird_qp)
max_res_rd_atom = max_ird_adapter

Bump up the per qp max module option to 32, allowing it to be increased
by the user up to the device max of max_ordird_qp.  32 seems to be
sufficient to maximize throughput for streaming read benchmarks.

Fail connection setup if the negotiated IRD exhausts the available
adapter ird resources.  The driver now tracks the amount of ird
resource in use and will not send an RI_WR/INIT to FW that would reduce
the available ird resources below zero.
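
A minimal sketch of the resulting clamping in the verbs query path,
assuming the firmware limits are handed up by the LLD (the helper name
and parameter plumbing here are illustrative):

#include <linux/kernel.h>
#include <rdma/ib_verbs.h>

/* Illustrative only: apply the module option and the FW-reported limits
 * to the values returned by ibv_query_device(). */
static void set_rd_atom_limits(struct ib_device_attr *props, int module_max,
			       int fw_max_ordird_qp, int fw_max_ird_adapter)
{
	props->max_qp_rd_atom      = min(module_max, fw_max_ordird_qp);
	props->max_qp_init_rd_atom = props->max_qp_rd_atom;
	props->max_res_rd_atom     = fw_max_ird_adapter;
}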

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-15 16:25:16 -07:00
Hariprasad Shenai
04e10e2164 iw_cxgb4: Detect Ing. Padding Boundary at run-time
Updates iw_cxgb4 to determine the Ingress Padding Boundary from
cxgb4_lld_info and takes subsequent actions based on it.
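
A sketch of the idea, assuming the new cxgb4_lld_info member is named
along the lines of sge_ingpadboundary (the exact field name may differ):

#include <linux/kernel.h>

/* struct cxgb4_lld_info comes from cxgb4_uld.h; the field name below is
 * illustrative of the boundary value the LLD now passes up at attach. */
static unsigned int pad_to_ingress_boundary(const struct cxgb4_lld_info *lldi,
					    unsigned int len)
{
	return ALIGN(len, lldi->sge_ingpadboundary);
}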

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-15 16:25:16 -07:00
Fabian Frederick
9f16dc2ec7 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c: remove unnecessary null test before debugfs_remove_recursive
Fix checkpatch warning:
"WARNING: debugfs_remove_recursive(NULL) is safe this check is probably not required"

Cc: Hariprasad S <hariprasad@chelsio.com>
Cc: netdev@vger.kernel.org
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-02 17:04:42 -07:00
Hariprasad Shenai
dde3aadf53 cxgb4vf: Adds device IDs for a few more Chelsio T4 Adapters
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-01 18:56:10 -07:00
Hariprasad Shenai
fb1e933d3c cxgb4: Adds device IDs for a few more Chelsio T4 Adapters
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-01 18:56:10 -07:00
Hariprasad Shenai
fc5ab02096 cxgb4: Replaced the backdoor mechanism to access the HW memory with PCIe Window method
Rip out a bunch of redundant PCI-E Memory Window read/write routines and
collapse the more general-purpose routines into a single routine, thereby
eliminating the need for a large stack frame (and extra data copying) in
the outer routine.  Change everything to use the improved routine,
t4_memory_rw.
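
Roughly how callers end up looking, with the t4_memory_rw() signature
paraphrased from the description above (the window constant and
direction flag names are illustrative):

#include <linux/types.h>

/* Illustrative caller of the consolidated routine: read one 32-bit word
 * from adapter memory through a PCI-E memory window. */
static int read_mem_word(struct adapter *adap, int mtype, u32 addr,
			 __be32 *val)
{
	return t4_memory_rw(adap, MEMWIN_NIC /* window */, mtype, addr,
			    sizeof(*val), val, T4_MEMORY_READ);
}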

Based on original work by Casey Leedom <leedom@chelsio.com> and
Steve Wise <swise@opengridcomputing.com>

Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-01 18:56:10 -07:00
Hariprasad Shenai
0abfd1524b cxgb4: Use FW interface to get BAR0 value
Use the firmware interface to get the BAR0 value since we really don't want
to use the PCI-E Configuration Space Backdoor access which is owned by the
firmware.

Set up PCI-E Memory Window registers using the true values programmed into
BAR registers.  When the PF4 "Master Function" is exported to a Virtual
Machine, the values returned by pci_resource_start() will be for the
synthetic PCI-E Configuration Space and not the real addresses. But we need
to program the PCI-E Memory Window address decoders with the real addresses
that we're going to be using in order to have accesses through the Memory
Windows work.
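
A sketch of the intent, with deliberately made-up register and field
names (the real code programs the PCIE memory-window decoder registers):

/* Program a memory-window decoder with the *real* BAR0 address obtained
 * from firmware -- not pci_resource_start(), which may be a synthetic
 * address when PF4 is exported to a VM.  MEMWIN0_DECODER_REG,
 * MEMWIN0_BASE_OFF and MEMWIN0_APERTURE_BITS are illustrative names. */
static void setup_memwin0(struct adapter *adap, u32 bar0_from_fw)
{
	t4_write_reg(adap, MEMWIN0_DECODER_REG,
		     (bar0_from_fw + MEMWIN0_BASE_OFF) | MEMWIN0_APERTURE_BITS);
}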

Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-01 18:56:10 -07:00
Hariprasad Shenai
35b1de5579 rdma/cxgb4: Fixes cxgb4 probe failure in VM when PF is exposed through PCI Passthrough
Change logic which determines our Physical Function at PCI Probe time.
Now we read the PL_WHOAMI register and get the Physical Function.

Pass the Physical Function to Upper Layer Drivers via the new field "pf"
added to the lld_info structure.  This is useful for the cases where the
PF, say PF4, is attached to a Virtual Machine via some form of "PCI
Pass Through" technology and the PCI Function shows up as PF0 in the VM.
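
In sketch form, using the usual cxgb4 register accessors (PL_WHOAMI and
SOURCEPF_GET are the existing driver macros; treat the exact spelling as
approximate):

/* Ask the chip which PF we really are, rather than trusting the PCI
 * function number visible inside a VM. */
static unsigned int detect_pf(struct adapter *adap)
{
	u32 whoami = t4_read_reg(adap, PL_WHOAMI);

	return SOURCEPF_GET(whoami);	/* value also exported via lld_info "pf" */
}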

Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-01 18:56:10 -07:00
David S. Miller
9b8d90b963 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net 2014-06-25 22:40:43 -07:00
Thadeu Lima de Souza Cascardo
40c9f8ab6c cxgb4: use dev_port to identify ports
Commit 3f85944fe2 ("net: Add sysfs file
for port number") introduced dev_port to network devices. cxgb4 adapters
have multiple ports on the same PCI function, and used dev_id to
identify those ports. That use was removed by commit
8c367fcbe6 ("cxgb4: Do not set
net_device::dev_id to VI index"), since dev_id should be used only when
devices share the same MAC address.

Using dev_port for cxgb4 allows different ports on the same PCI function
to be identified.
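
The per-port assignment is essentially one line; a sketch (the helper
name is illustrative, port_idx being the port index on the PCI function):

#include <linux/netdevice.h>

/* Expose the port index through /sys/class/net/<iface>/dev_port while
 * leaving dev_id alone (dev_id is reserved for shared-MAC cases). */
static void set_netdev_dev_port(struct net_device *netdev, u8 port_idx)
{
	netdev->dev_port = port_idx;
}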

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-25 16:01:54 -07:00
Li RongQing
ee9a33b263 cxgb4: No need to hold the adap_rcu_lock lock when reading adap_rcu_list
cxgb4_netdev may lead to a deadlock: it takes a spin lock and is called in
both thread and softirq context, but without disabling BH (the lockdep
report is quoted further below).  In fact, cxgb4_netdev only reads
adap_rcu_list under RCU protection, so there is no need to hold the spin
lock at all.
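
The fix amounts to dropping the spin lock on this read path and relying
on RCU alone; in sketch form (list and field names follow the driver,
but this is not the exact hunk):

#include <linux/rculist.h>

/* Walk adap_rcu_list purely under rcu_read_lock(); no adap_rcu_lock and
 * therefore no SOFTIRQ-ON-W vs IN-SOFTIRQ-W inconsistency. */
static struct net_device *lookup_cxgb4_netdev(struct net_device *netdev)
{
	struct adapter *adap;
	int i;

	rcu_read_lock();
	list_for_each_entry_rcu(adap, &adap_rcu_list, rcu_node)
		for (i = 0; i < MAX_NPORTS; i++)
			if (adap->port[i] == netdev) {
				rcu_read_unlock();
				return netdev;
			}
	rcu_read_unlock();
	return NULL;
}
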
	=================================
	[ INFO: inconsistent lock state ]
	3.14.7+ #24 Tainted: G         C O
	---------------------------------
	inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
	radvd/3794 [HC0[0]:SC1[1]:HE1:SE0] takes:
	 (adap_rcu_lock){+.?...}, at: [<ffffffffa09989ea>] clip_add+0x2c/0x116 [cxgb4]
	{SOFTIRQ-ON-W} state was registered at:
	  [<ffffffff810fca81>] __lock_acquire+0x34a/0xe48
	  [<ffffffff810fd98b>] lock_acquire+0x82/0x9d
	  [<ffffffff815d6ff8>] _raw_spin_lock+0x34/0x43
	  [<ffffffffa09989ea>] clip_add+0x2c/0x116 [cxgb4]
	  [<ffffffffa0998beb>] cxgb4_inet6addr_handler+0x117/0x12c [cxgb4]
	  [<ffffffff815da98b>] notifier_call_chain+0x32/0x5c
	  [<ffffffff815da9f9>] __atomic_notifier_call_chain+0x44/0x6e
	  [<ffffffff815daa32>] atomic_notifier_call_chain+0xf/0x11
	  [<ffffffff815b1356>] inet6addr_notifier_call_chain+0x16/0x18
	  [<ffffffffa01f72e5>] ipv6_add_addr+0x404/0x46e [ipv6]
	  [<ffffffffa01f8df0>] addrconf_add_linklocal+0x5f/0x95 [ipv6]
	  [<ffffffffa01fc3e9>] addrconf_notify+0x632/0x841 [ipv6]
	  [<ffffffff815da98b>] notifier_call_chain+0x32/0x5c
	  [<ffffffff810e09a1>] __raw_notifier_call_chain+0x9/0xb
	  [<ffffffff810e09b2>] raw_notifier_call_chain+0xf/0x11
	  [<ffffffff8151b3b7>] call_netdevice_notifiers_info+0x4e/0x56
	  [<ffffffff8151b3d0>] call_netdevice_notifiers+0x11/0x13
	  [<ffffffff8151c0a6>] netdev_state_change+0x1f/0x38
	  [<ffffffff8152f004>] linkwatch_do_dev+0x3b/0x49
	  [<ffffffff8152f184>] __linkwatch_run_queue+0x10b/0x144
	  [<ffffffff8152f1dd>] linkwatch_event+0x20/0x27
	  [<ffffffff810d7bc0>] process_one_work+0x1cb/0x2ee
	  [<ffffffff810d7e3b>] worker_thread+0x12e/0x1fc
	  [<ffffffff810dd391>] kthread+0xc4/0xcc
	  [<ffffffff815dc48c>] ret_from_fork+0x7c/0xb0
	irq event stamp: 3388
	hardirqs last  enabled at (3388): [<ffffffff810c6c85>]
	__local_bh_enable_ip+0xaa/0xd9
	hardirqs last disabled at (3387): [<ffffffff810c6c2d>]
	__local_bh_enable_ip+0x52/0xd9
	softirqs last  enabled at (3288): [<ffffffffa01f1d5b>]
	rcu_read_unlock_bh+0x0/0x2f [ipv6]
	softirqs last disabled at (3289): [<ffffffff815ddafc>]
	do_softirq_own_stack+0x1c/0x30

	other info that might help us debug this:
	 Possible unsafe locking scenario:

	       CPU0
	       ----
	  lock(adap_rcu_lock);
	  <Interrupt>
	    lock(adap_rcu_lock);

	 *** DEADLOCK ***

	5 locks held by radvd/3794:
	 #0:  (sk_lock-AF_INET6){+.+.+.}, at: [<ffffffffa020b85a>]
	rawv6_sendmsg+0x74b/0xa4d [ipv6]
	 #1:  (rcu_read_lock){.+.+..}, at: [<ffffffff8151ac6b>]
	rcu_lock_acquire+0x0/0x29
	 #2:  (rcu_read_lock){.+.+..}, at: [<ffffffffa01f4cca>]
	rcu_lock_acquire.constprop.16+0x0/0x30 [ipv6]
	 #3:  (rcu_read_lock){.+.+..}, at: [<ffffffff810e09b4>]
	rcu_lock_acquire+0x0/0x29
	 #4:  (rcu_read_lock){.+.+..}, at: [<ffffffffa0998782>]
	rcu_lock_acquire.constprop.40+0x0/0x30 [cxgb4]

	stack backtrace:
	CPU: 7 PID: 3794 Comm: radvd Tainted: G         C O 3.14.7+ #24
	Hardware name: Supermicro X7DBU/X7DBU, BIOS 6.00 12/03/2007
	 ffffffff81f15990 ffff88012fdc36a8 ffffffff815d0016 0000000000000006
	 ffff8800c80dc2a0 ffff88012fdc3708 ffffffff815cc727 0000000000000001
	 0000000000000001 ffff880100000000 ffffffff81015b02 ffff8800c80dcb58
	Call Trace:
	 <IRQ>  [<ffffffff815d0016>] dump_stack+0x4e/0x71
	 [<ffffffff815cc727>] print_usage_bug+0x1ec/0x1fd
	 [<ffffffff81015b02>] ? save_stack_trace+0x27/0x44
	 [<ffffffff810fbfaa>] ? check_usage_backwards+0xa0/0xa0
	 [<ffffffff810fc640>] mark_lock+0x11b/0x212
	 [<ffffffff810fca0b>] __lock_acquire+0x2d4/0xe48
	 [<ffffffff810fbfaa>] ? check_usage_backwards+0xa0/0xa0
	 [<ffffffff810fbff6>] ? check_usage_forwards+0x4c/0xa6
	 [<ffffffff810c6c8a>] ? __local_bh_enable_ip+0xaf/0xd9
	 [<ffffffff810fd98b>] lock_acquire+0x82/0x9d
	 [<ffffffffa09989ea>] ? clip_add+0x2c/0x116 [cxgb4]
	 [<ffffffffa0998782>] ? rcu_read_unlock+0x23/0x23 [cxgb4]
	 [<ffffffff815d6ff8>] _raw_spin_lock+0x34/0x43
	 [<ffffffffa09989ea>] ? clip_add+0x2c/0x116 [cxgb4]
	 [<ffffffffa09987b0>] ? rcu_lock_acquire.constprop.40+0x2e/0x30 [cxgb4]
	 [<ffffffffa0998782>] ? rcu_read_unlock+0x23/0x23 [cxgb4]
	 [<ffffffffa09989ea>] clip_add+0x2c/0x116 [cxgb4]
	 [<ffffffffa0998beb>] cxgb4_inet6addr_handler+0x117/0x12c [cxgb4]
	 [<ffffffff810fd99d>] ? lock_acquire+0x94/0x9d
	 [<ffffffff810e09b4>] ? raw_notifier_call_chain+0x11/0x11
	 [<ffffffff815da98b>] notifier_call_chain+0x32/0x5c
	 [<ffffffff815da9f9>] __atomic_notifier_call_chain+0x44/0x6e
	 [<ffffffff815daa32>] atomic_notifier_call_chain+0xf/0x11
	 [<ffffffff815b1356>] inet6addr_notifier_call_chain+0x16/0x18
	 [<ffffffffa01f72e5>] ipv6_add_addr+0x404/0x46e [ipv6]
	 [<ffffffff810fde6a>] ? trace_hardirqs_on+0xd/0xf
	 [<ffffffffa01fb634>] addrconf_prefix_rcv+0x385/0x6ea [ipv6]
	 [<ffffffffa0207950>] ndisc_rcv+0x9d3/0xd76 [ipv6]
	 [<ffffffffa020d536>] icmpv6_rcv+0x592/0x67b [ipv6]
	 [<ffffffff810c6c85>] ? __local_bh_enable_ip+0xaa/0xd9
	 [<ffffffff810c6c85>] ? __local_bh_enable_ip+0xaa/0xd9
	 [<ffffffff810fd8dc>] ? lock_release+0x14e/0x17b
	 [<ffffffffa020df97>] ? rcu_read_unlock+0x21/0x23 [ipv6]
	 [<ffffffff8150df52>] ? rcu_read_unlock+0x23/0x23
	 [<ffffffffa01f4ede>] ip6_input_finish+0x1e4/0x2fc [ipv6]
	 [<ffffffffa01f540b>] ip6_input+0x33/0x38 [ipv6]
	 [<ffffffffa01f5557>] ip6_mc_input+0x147/0x160 [ipv6]
	 [<ffffffffa01f4ba3>] ip6_rcv_finish+0x7c/0x81 [ipv6]
	 [<ffffffffa01f5397>] ipv6_rcv+0x3a1/0x3e2 [ipv6]
	 [<ffffffff8151ef96>] __netif_receive_skb_core+0x4ab/0x511
	 [<ffffffff810fdc94>] ? mark_held_locks+0x71/0x99
	 [<ffffffff8151f0c0>] ? process_backlog+0x69/0x15e
	 [<ffffffff8151f045>] __netif_receive_skb+0x49/0x5b
	 [<ffffffff8151f0cf>] process_backlog+0x78/0x15e
	 [<ffffffff8151f571>] ? net_rx_action+0x1a2/0x1cc
	 [<ffffffff8151f47b>] net_rx_action+0xac/0x1cc
	 [<ffffffff810c69b7>] ? __do_softirq+0xad/0x218
	 [<ffffffff810c69ff>] __do_softirq+0xf5/0x218
	 [<ffffffff815ddafc>] do_softirq_own_stack+0x1c/0x30
	 <EOI>  [<ffffffff810c6bb6>] do_softirq+0x38/0x5d
	 [<ffffffffa01f1d5b>] ? ip6_copy_metadata+0x156/0x156 [ipv6]
	 [<ffffffff810c6c78>] __local_bh_enable_ip+0x9d/0xd9
	 [<ffffffffa01f1d88>] rcu_read_unlock_bh+0x2d/0x2f [ipv6]
	 [<ffffffffa01f28b4>] ip6_finish_output2+0x381/0x3d8 [ipv6]
	 [<ffffffffa01f49ef>] ip6_finish_output+0x6e/0x73 [ipv6]
	 [<ffffffffa01f4a70>] ip6_output+0x7c/0xa8 [ipv6]
	 [<ffffffff815b1bfa>] dst_output+0x18/0x1c
	 [<ffffffff815b1c9e>] ip6_local_out+0x1c/0x21
	 [<ffffffffa01f2489>] ip6_push_pending_frames+0x37d/0x427 [ipv6]
	 [<ffffffff81558af8>] ? skb_orphan+0x39/0x39
	 [<ffffffffa020b85a>] ? rawv6_sendmsg+0x74b/0xa4d [ipv6]
	 [<ffffffffa020ba51>] rawv6_sendmsg+0x942/0xa4d [ipv6]
	 [<ffffffff81584cd2>] inet_sendmsg+0x3d/0x66
	 [<ffffffff81508930>] __sock_sendmsg_nosec+0x25/0x27
	 [<ffffffff8150b0d7>] sock_sendmsg+0x5a/0x7b
	 [<ffffffff810fd8dc>] ? lock_release+0x14e/0x17b
	 [<ffffffff8116d756>] ? might_fault+0x9e/0xa5
	 [<ffffffff8116d70d>] ? might_fault+0x55/0xa5
	 [<ffffffff81508cb1>] ? copy_from_user+0x2a/0x2c
	 [<ffffffff8150b70c>] ___sys_sendmsg+0x226/0x2d9
	 [<ffffffff810fcd25>] ? __lock_acquire+0x5ee/0xe48
	 [<ffffffff810fde01>] ? trace_hardirqs_on_caller+0x145/0x1a1
	 [<ffffffff8118efcb>] ? slab_free_hook.isra.71+0x50/0x59
	 [<ffffffff8115c81f>] ? release_pages+0xbc/0x181
	 [<ffffffff810fd99d>] ? lock_acquire+0x94/0x9d
	 [<ffffffff81115e97>] ? read_seqcount_begin.constprop.25+0x73/0x90
	 [<ffffffff8150c408>] __sys_sendmsg+0x3d/0x5b
	 [<ffffffff8150c433>] SyS_sendmsg+0xd/0x19
	 [<ffffffff815dc53d>] system_call_fastpath+0x1a/0x1f

Reported-by: Ben Greear <greearb@candelatech.com>
Cc: Casey Leedom <leedom@chelsio.com>
Cc: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-24 15:51:49 -07:00
Anish Bhatt
5433ba365f cxgb4: Fix endian bug introduced in cxgb4 dcb patchset
Hi,
 This patch fixes warnings generated by sparse, as pointed out by the kbuild
 test robot; please apply to net-next. Applies on top of
commit 79631c89ed ("trivial: net/irda/irlmp.c:
Fix closing brace followed by if")
-Anish

v2: cleanup submission as per davem's feedback

Fixes: 76bcb31efc ("cxgb4 : Add DCBx support codebase  and dcbnl_ops")
Signed-off-by: Anish Bhatt <anish@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-24 12:54:52 -07:00
Anish Bhatt
ce100b8b81 cxgb4 : Update copyright year on all cxgb4 files
Signed-off-by: Anish Bhatt <anish@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-22 21:13:33 -07:00
Anish Bhatt
19f43d1aa6 cxgb4 : Makefile & Kconfig changes for DCBx support
Signed-off-by: Anish Bhatt <anish@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-22 21:13:33 -07:00
Anish Bhatt
688848b149 cxgb4 : Integrate DCBx support into cxgb4 module. Register dcbnl_ops to give access to DCBx functions
Signed-off-by: Anish Bhatt <anish@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-22 21:13:33 -07:00
Anish Bhatt
76bcb31efc cxgb4 : Add DCBx support codebase and dcbnl_ops
Signed-off-by: Anish Bhatt <anish@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-22 21:13:33 -07:00
Anish Bhatt
989594e2f2 cxgb4 : Update fw interface file for DCBx support. Adds all the required fields to fw interface to communicate DCBx info
Signed-off-by: Anish Bhatt <anish@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-22 21:13:33 -07:00
Linus Torvalds
f9da455b93 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:

 1) Seccomp BPF filters can now be JIT'd, from Alexei Starovoitov.

 2) Multiqueue support in xen-netback and xen-netfront, from Andrew J
    Benniston.

 3) Allow tweaking of aggregation settings in cdc_ncm driver, from Bjørn
    Mork.

 4) BPF now has a "random" opcode, from Chema Gonzalez.

 5) Add more BPF documentation and improve test framework, from Daniel
    Borkmann.

 6) Support TCP fastopen over ipv6, from Daniel Lee.

 7) Add software TSO helper functions and use them to support software
    TSO in mvneta and mv643xx_eth drivers.  From Ezequiel Garcia.

 8) Support software TSO in fec driver too, from Nimrod Andy.

 9) Add Broadcom SYSTEMPORT driver, from Florian Fainelli.

10) Handle broadcasts more gracefully over macvlan when there are large
    numbers of interfaces configured, from Herbert Xu.

11) Allow more control over fwmark used for non-socket based responses,
    from Lorenzo Colitti.

12) Do TCP congestion window limiting based upon measurements, from Neal
    Cardwell.

13) Support busy polling in SCTP, from Neal Horman.

14) Allow RSS key to be configured via ethtool, from Venkata Duvvuru.

15) Bridge promisc mode handling improvements from Vlad Yasevich.

16) Don't use inetpeer entries to implement ID generation any more, it
    performs poorly, from Eric Dumazet.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1522 commits)
  rtnetlink: fix userspace API breakage for iproute2 < v3.9.0
  tcp: fixing TLP's FIN recovery
  net: fec: Add software TSO support
  net: fec: Add Scatter/gather support
  net: fec: Increase buffer descriptor entry number
  net: fec: Factorize feature setting
  net: fec: Enable IP header hardware checksum
  net: fec: Factorize the .xmit transmit function
  bridge: fix compile error when compiling without IPv6 support
  bridge: fix smatch warning / potential null pointer dereference
  via-rhine: fix full-duplex with autoneg disable
  bnx2x: Enlarge the dorq threshold for VFs
  bnx2x: Check for UNDI in uncommon branch
  bnx2x: Fix 1G-baseT link
  bnx2x: Fix link for KR with swapped polarity lane
  sctp: Fix sk_ack_backlog wrap-around problem
  net/core: Add VF link state control policy
  net/fsl: xgmac_mdio is dependent on OF_MDIO
  net/fsl: Make xgmac_mdio read error message useful
  net_sched: drr: warn when qdisc is not work conserving
  ...
2014-06-12 14:27:40 -07:00
Hariprasad Shenai
c887ad0e22 cxgb4: Change default Interrupt Holdoff Packet Count Threshold
Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-10 22:49:55 -07:00
Hariprasad Shenai
b408ff282d iw_cxgb4: don't truncate the recv window size
Fixed a bug that shows up with recv window sizes that exceed the size of
the RCV_BUFSIZ field in opt0 (>= 1024K).  If the recv window exceeds
this, then we specify the max possible in opt0 and add the rest in via
RX_DATA_ACK credits.
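
In sketch form the split looks like this (units assumed to be the 1KB
granularity of the RCV_BUFSIZ field; M_RCV_BUFSIZ is that field's
maximum value):

#include <linux/kernel.h>

/* Put as much of the receive window as fits into opt0's RCV_BUFSIZ and
 * return the remainder, to be granted later via RX_DATA_ACK credits. */
static u32 split_rcv_win(u32 rcv_win_kb, u32 *opt0_bufsiz_kb)
{
	*opt0_bufsiz_kb = min_t(u32, rcv_win_kb, M_RCV_BUFSIZ);
	return rcv_win_kb - *opt0_bufsiz_kb;	/* extra RX_DATA_ACK credits */
}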

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-10 22:49:54 -07:00
Hariprasad Shenai
92e7ae7172 iw_cxgb4: Choose appropriate hw mtu index and ISS for iWARP connections
Select the appropriate hw mtu index and initial sequence number to optimize
hw memory performance.

Add new cxgb4_best_aligned_mtu() which allows callers to provide enough
information to be used to [possibly] select an MTU which will result in the
TCP Data Segment Size (AKA Maximum Segment Size) being an aligned value.

If an RTR message exchange is required, then align the ISS to 8B - 1 + 4, so
that after the SYN the send seqno will align on a 4B boundary. The RTR
message exchange will leave the send seqno aligned on an 8B boundary.
If an RTR is not required, then align the ISS to 8B - 1.  The goal is
to have the send seqno be 8B aligned when we send the first FPDU.
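
A small model of the ISS rule described above (illustrative arithmetic
only, not the driver's exact expression):

#include <linux/types.h>

/* Choose an ISS so the send seqno is 8B aligned at the first FPDU:
 * 8B - 1 when no RTR exchange follows the SYN, 8B - 1 + 4 when it does. */
static u32 pick_iss(u32 random_seq, bool rtr_needed)
{
	return (random_seq & ~7U) + (rtr_needed ? (8 - 1 + 4) : (8 - 1));
}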

Based on original work by Casey Leedom <leedom@chelsio.com> and
Steve Wise <swise@opengridcomputing.com>

Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-10 22:49:54 -07:00
Hariprasad Shenai
cf38be6d61 iw_cxgb4: Allocate and use IQs specifically for indirect interrupts
Currently indirect interrupts for RDMA CQs funnel through the LLD's RDMA
RXQs, which also handle direct interrupts for offload CPLs during RDMA
connection setup/teardown.  The intended T4 usage model, however, is to
have indirect interrupts flow through dedicated IQs. IE not to mix
indirect interrupts with CPL messages in an IQ.  This patch adds the
concept of RDMA concentrator IQs, or CIQs, setup and maintained by the
LLD and exported to iw_cxgb4 for use when creating CQs. RDMA CPLs will
flow through the LLD's RDMA RXQs, and CQ interrupts flow through the
CIQs.

Design:

cxgb4 creates and exports an array of CIQs for the RDMA ULD.  These IQs
are sized according to the max number of CQs available at adapter init.
In addition, these IQs don't need FL buffers since they only service
indirect interrupts.  One CIQ is setup per RX channel similar to the
RDMA RXQs.

iw_cxgb4 will utilize these CIQs based on the vector value passed into
create_cq().  The num_comp_vectors advertised by iw_cxgb4 will be the
number of CIQs configured, and thus the vector value will be the index
into the array of CIQs.
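
A sketch of the vector-to-CIQ mapping (the lld_info member names here
are illustrative, not the exact ones exported by cxgb4):

/* create_cq()'s comp_vector simply indexes the array of concentrator IQ
 * ids the LLD exported at attach time. */
static unsigned int ciq_id_for_vector(const struct cxgb4_lld_info *lldi,
				      int comp_vector)
{
	return lldi->ciq_ids[comp_vector % lldi->nciq];
}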

Based on original work by Steve Wise <swise@opengridcomputing.com>

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-10 22:49:54 -07:00
Jiri Pirko
537fae0101 net: use SPEED_UNKNOWN and DUPLEX_UNKNOWN when appropriate
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-06 16:24:07 -07:00
Linus Torvalds
776edb5931 Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into next
Pull core locking updates from Ingo Molnar:
 "The main changes in this cycle were:

   - reduced/streamlined smp_mb__*() interface that allows more usecases
     and makes the existing ones less buggy, especially in rarer
     architectures

   - add rwsem implementation comments

   - bump up lockdep limits"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
  rwsem: Add comments to explain the meaning of the rwsem's count field
  lockdep: Increase static allocations
  arch: Mass conversion of smp_mb__*()
  arch,doc: Convert smp_mb__*()
  arch,xtensa: Convert smp_mb__*()
  arch,x86: Convert smp_mb__*()
  arch,tile: Convert smp_mb__*()
  arch,sparc: Convert smp_mb__*()
  arch,sh: Convert smp_mb__*()
  arch,score: Convert smp_mb__*()
  arch,s390: Convert smp_mb__*()
  arch,powerpc: Convert smp_mb__*()
  arch,parisc: Convert smp_mb__*()
  arch,openrisc: Convert smp_mb__*()
  arch,mn10300: Convert smp_mb__*()
  arch,mips: Convert smp_mb__*()
  arch,metag: Convert smp_mb__*()
  arch,m68k: Convert smp_mb__*()
  arch,m32r: Convert smp_mb__*()
  arch,ia64: Convert smp_mb__*()
  ...
2014-06-03 12:57:53 -07:00
Ben Hutchings
fe62d00137 ethtool: Replace ethtool_ops::{get,set}_rxfh_indir() with {get,set}_rxfh()
ETHTOOL_{G,S}RXFHINDIR and ETHTOOL_{G,S}RSSH should work for drivers
regardless of whether they expose the hash key, unless you try to
set a hash key for a driver that doesn't expose it.
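
For a driver with an indirection table but no exposed hash key, the
resulting op looks roughly like this (the table name and size are
hypothetical, and the signatures follow this patch series as I read it;
later kernels also added an hfunc argument to these ops):

#include <linux/kernel.h>
#include <linux/netdevice.h>

static u32 my_rss_table[128];			/* hypothetical RSS table */

/* ETHTOOL_GRSSH path: fill in whichever of indir/key the driver has. */
static int my_get_rxfh(struct net_device *dev, u32 *indir, u8 *key)
{
	unsigned int i;

	if (indir)
		for (i = 0; i < ARRAY_SIZE(my_rss_table); i++)
			indir[i] = my_rss_table[i];
	/* no hash key exposed: leave 'key' untouched */
	return 0;
}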

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-06-03 02:42:44 +01:00
Wilfried Klaebe
7ad24ea4bf net: get rid of SET_ETHTOOL_OPS

Dave Miller mentioned he'd like to see SET_ETHTOOL_OPS gone.
This does that.

Mostly done via coccinelle script:
@@
struct ethtool_ops *ops;
struct net_device *dev;
@@
-       SET_ETHTOOL_OPS(dev, ops);
+       dev->ethtool_ops = ops;

Compile tested only, but I'd seriously wonder if this broke anything.

Suggested-by: Dave Miller <davem@davemloft.net>
Signed-off-by: Wilfried Klaebe <w-lkml@lebenslange-mailadresse.de>
Acked-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-13 17:43:20 -04:00
dingtianhong
f06c7f9f92 vlan: rename __vlan_find_dev_deep() to __vlan_find_dev_deep_rcu()
__vlan_find_dev_deep should always be called under RCU; following
David's suggestion, renaming it to __vlan_find_dev_deep_rcu looks
more reasonable.
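
Typical usage after the rename, in sketch form (assuming the usual
(real_dev, proto, vid) argument order):

#include <linux/if_vlan.h>

/* The lookup must sit inside an RCU read-side critical section. */
static void log_vlan_mapping(struct net_device *real_dev, u16 vid)
{
	struct net_device *vdev;

	rcu_read_lock();
	vdev = __vlan_find_dev_deep_rcu(real_dev, htons(ETH_P_8021Q), vid);
	if (vdev)
		pr_debug("vlan %u on %s -> %s\n", vid, real_dev->name, vdev->name);
	rcu_read_unlock();
}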

Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-12 14:39:13 -04:00
David S. Miller
5f013c9bc7 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Conflicts:
	drivers/net/ethernet/altera/altera_sgdma.c
	net/netlink/af_netlink.c
	net/sched/cls_api.c
	net/sched/sch_api.c

The netlink conflict dealt with moving to netlink_capable() and
netlink_ns_capable() in the 'net' tree vs. supporting 'tc' operations
in non-init namespaces.  These were simple transformations from
netlink_capable to netlink_ns_capable.

The Altera driver conflict was simply code removal overlapping some
void pointer cast cleanups in net-next.

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-12 13:19:14 -04:00
Hariprasad Shenai
c3136f5540 cxgb4vf: Check if rx checksum offload is enabled, while reading hardware calculated checksum
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-12 12:47:46 -04:00
Hariprasad Shenai
cca2822d37 cxgb4: Check if rx checksum offload is enabled, while reading hardware calculated checksum
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-12 12:47:46 -04:00
Hariprasad Shenai
3e00a50935 cxgb4: Decode the firmware port and module type a bit more for ethtool
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-12 12:47:45 -04:00
Roland Dreier
d2e752db6d cxgb4: Decode PCIe Gen3 link speed
Add handling for " 8 GT/s" in print_port_info().

Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-04-30 16:12:21 -04:00
Hariprasad Shenai
535cdf3c19 cxgb4: Update Kconfig to include Chelsio T5 adapter
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-04-27 19:14:36 -04:00
Peter Zijlstra
4e857c58ef arch: Mass conversion of smp_mb__*()
Mostly scripted conversion of the smp_mb__* barriers.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-55dhyhocezdw1dg7u19hmh1u@git.kernel.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-18 14:20:48 +02:00
Steve Wise
6f1d721037 cxgb4: use the correct max size for firmware flash
The wrong max fw size was being used, causing false
"too big" errors when running ethtool -f.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-04-15 15:50:02 -04:00
Steve Wise
bfae232499 cxgb4: Save the correct mac addr for hw-loopback connections in the L2T
Hardware needs the local device mac address to support hw loopback for
rdma loopback connections.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-04-14 16:26:48 -04:00
Hariprasad Shenai
63ea7dce8c cxgb4vf: Adds device IDs for a few more Chelsio adapters
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-28 16:34:40 -04:00
Hariprasad Shenai
0183aa626f cxgb4: Adds device IDs for a few more Chelsio Adapters
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-28 16:34:40 -04:00
Joe Perches
12f2a47945 chelsio: Remove addressof casts to same type
Using addressof then casting to the original type is pointless,
so remove these unnecessary casts.

Done via coccinelle script:

$ cat typecast.cocci
@@
type T;
T foo;
@@

-	(T *)&foo
+	&foo

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-26 15:46:15 -04:00
Eric W. Biederman
42ffda5fe7 cxgb4vf: Call dev_kfree/consume_skb_any instead of [dev_]kfree_skb.
Replace kfree_skb with dev_consume_skb_any in free_tx_desc that can be
called in hard irq and other contexts. dev_consume_skb_any is used
as this function consumes successfully transmitted skbs.

Replace dev_kfree_skb with dev_kfree_skb_any in t4vf_eth_xmit that can
be called in hard irq and other contexts, on paths that drop the skb.

Replace dev_kfree_skb with dev_consume_skb_any in t4vf_eth_xmit that can
be called in hard irq and other contexts, on paths that successfully
transmit the skb.
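
The pattern applied throughout this series, in sketch form (the transmit
path shown is hypothetical, for a device that copies the skb into its
own ring before the function returns):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (skb->len == 0) {			/* illustrative drop condition */
		dev_kfree_skb_any(skb);		/* drop path: safe in hard irq */
		return NETDEV_TX_OK;
	}

	/* ... copy/queue the packet to the hardware here ... */

	dev_consume_skb_any(skb);		/* successfully transmitted */
	return NETDEV_TX_OK;
}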

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2014-03-24 21:18:58 -07:00
Eric W. Biederman
a7525198a8 cxgb4: Call dev_kfree/consume_skb_any instead of [dev_]kfree_skb.
Replace kfree_skb with dev_consume_skb_any in free_tx_desc that can be
called in hard irq and other contexts. dev_consume_skb_any is used
as this function consumes successfully transmitted skbs.

Replace dev_kfree_skb with dev_kfree_skb_any in t4_eth_xmit that can
be called in hard irq and other contexts, on paths that drop the skb.

Replace dev_kfree_skb with dev_consume_skb_any in t4_eth_xmit that can
be called in hard irq and other contexts, on paths that successfully
transmit the skb.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2014-03-24 21:18:57 -07:00
Eric W. Biederman
f9ec8131bb cxgb3: Call dev_kfree/consume_skb_any instead of [dev_]kfree_skb.
Replace kfree_skb with dev_consume_skb_any in free_tx_desc, and
write_tx_pkt_wr that can be called in hard irq and other contexts.

Replace dev_kfree_skb with dev_kfree_skb_any in t3_eth_xmit that can
be called in hard irq and other contexts.

dev_kfree_skb is replaced with dev_kfree_skb_any in t3_eth_xmit as
that location is a packet drop, while kfree_skb in free_tx_desc,
and in write_tx_pkt_wr are places where packets are consumed
in a healthy manner.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2014-03-24 21:18:57 -07:00
Steve Wise
05eb23893c cxgb4/iw_cxgb4: Doorbell Drop Avoidance Bug Fixes
The current logic suffers from a slow response time to disable user DB
usage, and also fails to avoid DB FIFO drops under heavy load. This commit
fixes these deficiencies and makes the avoidance logic closer to optimal.
This is done by more efficiently notifying the ULDs of potential DB
problems, and implements a smoother flow control algorithm in iw_cxgb4,
which is the ULD that puts the most load on the DB fifo.

Design:

cxgb4:

Direct ULD callback from the DB FULL/DROP interrupt handler.  This allows
the ULD to stop doing user DB writes as quickly as possible.

While user DB usage is disabled, the LLD will accumulate DB write events
for its queues.  Then once DB usage is reenabled, a single DB write is
done for each queue with its accumulated write count.  This reduces the
load put on the DB fifo when reenabling.

iw_cxgb4:

Instead of marking each qp to indicate DB writes are disabled, we create
a device-global status page that each user process maps.  This allows
iw_cxgb4 to only set this single bit to disable all DB writes for all
user QPs vs traversing the idr of all the active QPs.  If the libcxgb4
doesn't support this, then we fall back to the old approach of marking
each QP.  Thus we allow the new driver to work with an older libcxgb4.

When the LLD upcalls iw_cxgb4 indicating DB FULL, we disable all DB writes
via the status page and transition the DB state to STOPPED.  As user
processes see that DB writes are disabled, they call into iw_cxgb4
to submit their DB write events.  Since the DB state is in STOPPED,
the QP trying to write gets enqueued on a new DB "flow control" list.
As subsequent DB writes are submitted for this flow controlled QP, the
amount of writes are accumulated for each QP on the flow control list.
So all the user QPs that are actively ringing the DB get put on this
list and the number of writes they request are accumulated.

When the LLD upcalls iw_cxgb4 indicating DB EMPTY, which is in a workq
context, we change the DB state to FLOW_CONTROL, and begin resuming all
the QPs that are on the flow control list.  This logic runs until
the flow control list is empty or we exit FLOW_CONTROL mode (due to
a DB DROP upcall, for example).  QPs are removed from this list, and
their accumulated DB write counts written to the DB FIFO.  Sets of QPs,
called chunks in the code, are removed at one time. The chunk size is 64.
So 64 QPs are resumed at a time, and before the next chunk is resumed, the
logic waits (blocks) for the DB FIFO to drain.  This prevents resuming too
quickly and overflowing the FIFO.  Once the flow control list is empty,
the db state transitions back to NORMAL and user QPs are again allowed
to write directly to the user DB register.

The algorithm is designed such that if the DB write load is high enough,
then all the DB writes get submitted by the kernel using this flow
controlled approach to avoid DB drops.  As the load lightens though, we
resume to normal DB writes directly by user applications.
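
A compressed sketch of the state machine described above (types and
helpers here are stand-ins; the real driver also tracks per-QP
accumulated write counts and drains the FIFO between 64-QP chunks):

#include <linux/types.h>

enum db_state { NORMAL, FLOW_CONTROL, STOPPED };

struct db_ctl {				/* stand-in for the iw_cxgb4 device state */
	enum db_state state;
	bool user_db_disabled;		/* mirrored into the shared status page */
};

static void on_db_full(struct db_ctl *db)	/* "DB FULL/DROP" upcall from LLD */
{
	db->state = STOPPED;
	db->user_db_disabled = true;	/* user QPs now submit DB writes to the kernel */
}

static void on_db_empty(struct db_ctl *db)	/* "DB EMPTY" upcall (workq context) */
{
	db->state = FLOW_CONTROL;
	/* ... resume QPs from the flow-control list in chunks of 64,
	 * waiting for the DB FIFO to drain between chunks ... */
	db->state = NORMAL;
	db->user_db_disabled = false;	/* user QPs ring the DB directly again */
}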

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-14 22:44:11 -04:00
Steve Wise
7a2cea2aaa cxgb4/iw_cxgb4: Treat CPL_ERR_KEEPALV_NEG_ADVICE as negative advice
Based on original work by Anand Priyadarshee <anandp@chelsio.com>.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-14 22:44:11 -04:00
Kumar Sanghvi
ca71de6ba7 cxgb4: Calculate len properly for LSO path
Commit 0034b29 ("cxgb4: Don't assume LSO only uses SGL path in t4_eth_xmit()")
introduced a regression wherein the length was calculated wrongly for the LSO path,
causing chip hangs.
So, correct the calculation of len.

Fixes: 0034b29 ("cxgb4: Don't assume LSO only uses SGL path in t4_eth_xmit()")
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-13 14:36:05 -04:00
Kumar Sanghvi
c2b955e006 cxgb4: Updates for T5 SGE's Egress Congestion Threshold
Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-13 14:36:05 -04:00
Kumar Sanghvi
0f4d201f74 cxgb4: Rectify emitting messages about SGE Ingress DMA channels being potentially stuck
Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-13 14:36:05 -04:00
Kumar Sanghvi
68bce1922f cxgb4: Add code to dump SGE registers when hitting idma hangs
Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-13 14:36:05 -04:00
Kumar Sanghvi
92ddcc7b8f cxgb4: Fix some small bugs in t4_sge_init_soft() when our Page Size is 64KB
We'd come in with SGE_FL_BUFFER_SIZE[0] and [1] both equal to 64KB and the
extant logic would flag that as an error.

Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-13 14:36:05 -04:00