Data structures that are used both with and without RCU protection
are difficult to write in a sparse-clean manner. If you mark the
relevant pointers with __rcu, sparse will complain about all non-RCU
uses, but if you don't mark those pointers, sparse will complain about
all RCU uses.
This commit therefore suppresses sparse warnings for rcu_dereference_raw(),
allowing mixed-protection data structures to avoid these warnings.
Reported-by: David Howells <dhowells@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Implement an RCU-safe variant of rb_replace_node() and rearrange
rb_replace_node() to do things in the same order.
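For illustration, a caller might use the new helper like this (node and lock names are hypothetical). The replacement node must sort identically to the victim, and the write side still needs its own lock - RCU only protects concurrent readers:

struct my_node {
        struct rb_node  rb;
        struct rcu_head rcu;
        u32             key;
};

static void my_replace(struct rb_root *root, spinlock_t *lock,
                       struct my_node *old, struct my_node *new)
{
        /* new->key must equal old->key so the tree order is unchanged. */
        spin_lock(lock);
        rb_replace_node_rcu(&old->rb, &new->rb, root);
        spin_unlock(lock);

        /* RCU readers may still be looking at the old node, so only free it
         * after a grace period. */
        kfree_rcu(old, rcu);
}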
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Overhaul the usage count accounting for the rxrpc_connection struct to make
it easier to implement RCU access from the data_ready handler.
The problem is that currently we're using a lock to prevent the garbage
collector from trying to clean up a connection that we're contemplating
unidling. We could just stick incoming packets on the connection we find,
but then we have the problem that we may race when dispatching a work item
to process them, as we need to give that work item a ref to prevent the
rxrpc_connection struct from disappearing in the meantime.
Further, incoming packets may get discarded if attached to an
rxrpc_connection struct that is going away. Whilst this is not a total
disaster - the client will presumably resend - it would delay processing of
the call. This would affect the AFS client filesystem's service manager
operation.
To this end:
(1) We now maintain an extra count on the connection usage count whilst it
is on the connection list. This means it is not in use when its
refcount is 1.
(2) When trying to reuse an old connection, we only increment the refcount
if it is greater than 0. If it is 0, we replace it in the tree with a
new candidate connection.
(3) Two connection flags are added to indicate whether or not a connection
is in the local's client connection tree (used by sendmsg) or the
peer's service connection tree (used by data_ready). This makes sure
that we don't try to remove a connection that has been replaced.
The flags are tested under lock with the removal operation to prevent
the reaper from killing the rxrpc_connection struct whilst someone
else is trying to effect a replacement.
This could probably be alleviated by using memory barriers between the
flag set/test and the rb_tree ops. The rb_tree op would still need to
be under the lock, however.
(4) When trying to reap an old connection, we try to flip the usage count
from 1 to 0. If it's not 1 at that point, then it must've come back
to life temporarily and we ignore it.
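For illustration, the refcount handling described in (2) and (4) above boils down to the usual conditional atomics; the helper names below are hypothetical, only the usage field is as in struct rxrpc_connection:

static bool rxrpc_reuse_conn(struct rxrpc_connection *conn)
{
        /* (2) Only reuse a connection whose usage count is still non-zero;
         * if it is 0 the reaper has already claimed it and the caller must
         * insert its candidate connection instead. */
        return atomic_inc_not_zero(&conn->usage);
}

static bool rxrpc_reap_conn(struct rxrpc_connection *conn)
{
        /* (4) Only kill a connection if we can flip its usage count from 1
         * (only the connection list holds a ref) to 0; otherwise it has come
         * back to life and is left alone. */
        return atomic_cmpxchg(&conn->usage, 1, 0) == 1;
}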
Signed-off-by: David Howells <dhowells@redhat.com>
Move the lookup of a peer from a call that's being accepted into the
function that creates a new incoming connection. This will allow us to
avoid incrementing the peer's usage count in some cases in future.
Note that I haven't bothered to integrate rxrpc_get_addr_from_skb() with
rxrpc_extract_addr_from_skb() as I'm going to delete the former in the very
near future.
Signed-off-by: David Howells <dhowells@redhat.com>
Split the service-specific connection code out into its own file. The
client-specific code has already been split out. This will leave just the
common code in the original file.
Signed-off-by: David Howells <dhowells@redhat.com>
Split the client-specific connection code out into its own file. It will
behave somewhat differently from the service-specific connection code, so
it makes sense to separate them.
Signed-off-by: David Howells <dhowells@redhat.com>
Each channel on a connection has a separate, independent number space from
which to allocate callNumber values. It is entirely possible, for example,
to have a connection with four active calls, each with call number 1.
Note that the callNumber values for any particular channel don't have to
start at 1, but they are supposed to increment monotonically for that
channel from a client's perspective and may not be reused once the call
number is transmitted (until the epoch cycles all the way back round).
Currently, however, call numbers are allocated on a per-connection basis
and, further, are held in an rb-tree. The rb-tree is redundant as the four
channel pointers in the rxrpc_connection struct are entirely capable of
pointing to all the calls currently in progress on a connection.
To this end, make the following changes:
(1) Handle call number allocation independently per channel.
(2) Get rid of the conn->calls rb-tree. This is overkill as a connection
may have a maximum of four calls in progress at any one time. Use the
pointers in the channels[] array instead, indexed by the channel
number from the packet.
(3) For each channel, save the result of the last call that was in
progress on that channel in conn->channels[] so that the final ACK or
ABORT packet can be replayed if necessary. Any call earlier than that
is just ignored. If we've seen the next call number in a packet, the
last one is most definitely defunct.
(4) When generating a RESPONSE packet for a connection, the call number
counter for each channel must be included in it.
(5) When parsing a RESPONSE packet for a connection, the call number
counters contained therein should be used to set the minimum expected
call numbers on each channel.
To do in future commits:
(1) Replay terminal packets based on the last call stored in
conn->channels[].
(2) Connections should be retired before the callNumber space on any
channel runs out.
(3) A server is expected to disregard or reject any new incoming call that
has a call number less than the current call number counter. The call
number counter for that channel must be advanced to the new call
number.
Note that the server cannot just require that the next call that it
sees on a channel be exactly the call number counter + 1 because then
there's a scenario that could cause a problem: The client transmits a
packet to initiate a connection, the network goes out, the server
sends an ACK (which gets lost), the client sends an ABORT (which also
gets lost); the network then reconnects, the client then reuses the
call number for the next call (it doesn't know the server already saw
the call number), but the server thinks it already has the first
packet of this call (it doesn't know that the client doesn't know that
it saw the call number the first time).
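For illustration, the per-channel bookkeeping described in (1)-(3) above amounts to something like the following (field and helper names are approximate, not the exact driver code):

#define RXRPC_MAXCALLS 4

struct rxrpc_connection {
        /* ... */
        struct rxrpc_call       *channels[RXRPC_MAXCALLS];     /* active call per channel */
        u32                     call_counter[RXRPC_MAXCALLS];  /* last callNumber allocated */
};

/* Client side: call numbers are now allocated per channel, not per conn. */
static u32 rxrpc_alloc_call_number(struct rxrpc_connection *conn, u8 channel)
{
        return ++conn->call_counter[channel];
}

/* Receive side: no rb-tree lookup; the channel index in the packet's cid
 * leads straight to the call (or to the last call's saved result). */
static struct rxrpc_call *rxrpc_call_for_channel(struct rxrpc_connection *conn,
                                                 u8 channel)
{
        return conn->channels[channel];
}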
Signed-off-by: David Howells <dhowells@redhat.com>
The socket's accept queue (socket->acceptq) should be accessed under
socket->call_lock, not under the connection lock.
Signed-off-by: David Howells <dhowells@redhat.com>
Add RCU destruction for connections and calls as the RCU lookup from the
transport socket data_ready handler is going to come along shortly.
Whilst we're at it, move the cleanup workqueue flushing and RCU barrierage
into the destruction code for the objects that need it (locals and
connections) and add the extra RCU barrier required for connection cleanup.
Signed-off-by: David Howells <dhowells@redhat.com>
When a call is disconnected, clear the call's pointer to the connection and
release the associated ref on that connection. This means that the call no
longer pins the connection and the connection can be discarded even before
the call is.
As the code currently stands, the call struct is effectively pinned by
userspace until userspace has performed a recvmsg() to retrieve the final
call state, since sk_buffs on the receive queue pin the call to which
they're related, because:
(1) The rxrpc_call struct contains the userspace ID that recvmsg() has to
include in the control message buffer to indicate which call is being
referred to. This ID must remain valid until the terminal packet is
completely read and must be invalidated immediately at that point as
userspace is entitled to immediately reuse it.
(2) The final ACK to the reply to a client call isn't sent until the last
data packet is entirely read (it's probably worth altering this in
future to send the ACK as soon as all the data has been received).
This change requires a bit of rearrangement to make sure that the call
isn't going to try and access the connection again after protocol
completion:
(1) Delete the error link earlier when we're releasing the call. Possibly
network errors should be distributed via connections at the cost of
adding in an access to the rxrpc_connection struct.
(2) Remove the call from the connection's call tree before disconnecting
the call. The call tree needs to be removed anyway and incoming
packets delivered by channel pointer instead.
(3) The release call event should be considered last after all other
events have been processed so that we don't need access to the
connection again.
(4) Move the channel_lock taking from rxrpc_release_call() to
rxrpc_disconnect_call() where it will be required in future.
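For illustration, the disconnection step described here reduces to roughly this (helper and field names approximate):

static void rxrpc_disconnect_call(struct rxrpc_call *call)
{
        struct rxrpc_connection *conn = call->conn;

        spin_lock(&conn->channel_lock);
        conn->channels[call->channel] = NULL;   /* leave the channel/call tree */
        spin_unlock(&conn->channel_lock);

        call->conn = NULL;
        rxrpc_put_connection(conn);     /* the call no longer pins the connection */
}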
Signed-off-by: David Howells <dhowells@redhat.com>
If rxrpc_connect_call() fails during the creation of a client connection,
there are two bugs that we can hit that need fixing:
(1) The call state should be moved to RXRPC_CALL_DEAD before the call
cleanup phase is invoked. If not, this can cause an assertion failure
later.
(2) call->link should be reinitialised after being deleted in
rxrpc_new_client_call() - which otherwise leads to a failure later
when the call cleanup attempts to delete the link again.
Signed-off-by: David Howells <dhowells@redhat.com>
Rather than calling rxrpc_get_connection() manually before calling
rxrpc_queue_conn(), do it inside the queue wrapper.
This allows us to do some important fixes:
(1) If the usage count is 0, do nothing. This prevents connections from
being reanimated once they're dead.
(2) If rxrpc_queue_work() fails because the work item is already queued,
retract the usage count increment which would otherwise be lost.
(3) Don't take a ref on the connection in the work function. By passing
the ref through the work item, this is unnecessary. Doing it in the
work function is too late anyway. Previously, connection-directed
packets held a ref on the connection, but that's not really the best
idea.
And another useful change:
(*) Don't need to take a refcount on the connection in the data_ready
handler unless we invoke the connection's work item. We're using RCU
there so that's otherwise redundant.
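A sketch of the wrapper implementing (1) and (2) above (the body is illustrative; rxrpc_queue_work() is the driver's existing queue_work() wrapper):

void rxrpc_queue_conn(struct rxrpc_connection *conn)
{
        /* (1) A usage count of 0 means the connection is dead; don't
         *     reanimate it. */
        if (!atomic_inc_not_zero(&conn->usage))
                return;

        /* (2) If the work item was already queued, the ref we just took has
         *     no consumer, so give it back. */
        if (!rxrpc_queue_work(&conn->processor))
                rxrpc_put_connection(conn);
}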
Signed-off-by: David Howells <dhowells@redhat.com>
Check that the client conns cache is empty before module removal and bug if
not, listing any offending connections that are still present. Unfortunately,
if there are connections still around, then the transport socket is still
unexpectedly open and active, so we can't just unallocate the connections.
Signed-off-by: David Howells <dhowells@redhat.com>
Turn the connection event and state #define lists into enums and move
outside of the struct definition.
Whilst we're at it, change _SERVER to _SERVICE in those identifiers and add
EV_ into the event name to distinguish them from flags and states.
Also add a symbol indicating the number of states and use that in the state
text array.
Signed-off-by: David Howells <dhowells@redhat.com>
Provide queueing helper functions so that the queueing of local and
connection objects can be fixed later.
The issue is that a ref on the object needs to be passed to the work queue,
but the act of queueing the object may fail because the object is already
queued. Testing the queuedness of an object beforehand doesn't work
because there can be a race with someone else trying to queue it. What
will have to be done is to adjust the refcount depending on the result of
the queue operation.
Signed-off-by: David Howells <dhowells@redhat.com>
rxkad uses stack memory in SG lists which would not work if stacks were
allocated from vmalloc memory. In fact, in most cases this isn't even
necessary as the stack memory ends up getting copied over to kmalloc
memory.
This patch eliminates all the unnecessary stack memory uses by supplying
the final destination directly to the crypto API. In the two instances
where a temporary buffer is actually needed, we switch to using a scratch
area in the rxrpc_call struct (only one DATA packet will be secured or
verified at a time).
Finally, there is no need to split a split-page buffer into two SG entries,
so the code dealing with that has been removed.
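To illustrate the change (buffer and field names here are hypothetical), the SG entries now reference kmalloc-backed memory rather than the stack:

static void example_secure_packet(struct rxrpc_call *call)
{
        struct scatterlist sg;

        /*
         * Before: an SG entry pointing at on-stack memory, which breaks if
         * stacks are allocated from vmalloc space:
         *
         *      u8 tmpbuf[16];
         *      sg_init_one(&sg, tmpbuf, sizeof(tmpbuf));
         */

        /* After: point the SG entry at a kmalloc-backed scratch area in the
         * rxrpc_call (safe because only one DATA packet is secured or
         * verified at a time). */
        sg_init_one(&sg, call->crypto_buf, sizeof(call->crypto_buf));

        /* ... hand &sg to the skcipher request as before ... */
}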
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: David Howells <dhowells@redhat.com>
When looking up a client connection to which to route a packet, we need to
check that the packet came from the correct source so that a peer can't try
to muck around with another peer's connection.
Signed-off-by: David Howells <dhowells@redhat.com>
When a jumbo packet is being split up and processed, the crypto checksum
for each split-out packet is in the jumbo header and needs placing in the
reconstructed packet header.
When the code was changed to keep the stored copy of the packet header in
host byte order, this reconstruction was missed.
Found with sparse with CF=-D__CHECK_ENDIAN__:
../net/rxrpc/input.c:479:33: warning: incorrect type in assignment (different base types)
../net/rxrpc/input.c:479:33: expected unsigned short [unsigned] [usertype] _rsvd
../net/rxrpc/input.c:479:33: got restricted __be16 [addressable] [usertype] _rsvd
Fixes: 0d12f8a402 ("rxrpc: Keep the skb private record of the Rx header in host byte order")
Signed-off-by: David Howells <dhowells@redhat.com>
Russell King says:
====================
Initial SFP support patches
Please review and merge this initial patch set, which is part of a
larger set previously posted adding SFP support to phy and mvneta.
This initial set is focused on cleaning up and reorganising the
fixed-phy code to allow the core software-phy code to be re-used.
These are based on net-next.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
There is no prevention of a concurrent call to both fixed_mdio_read()
and fixed_phy_update_state(), which can result in the state being
modified while it's being inspected. Fix this by using a seqcount
to detect modifications, and memcpy()ing the state.
We remain slightly naughty here, calling link_update() and updating
the link status within the read-side loop - which would need rework
of the design to change.
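For illustration, the seqcount scheme looks roughly like this (the struct layout is simplified; struct fixed_phy_status is the real state type, and writers are assumed to be serialized by the existing fixed_phy lock):

struct fixed_phy {
        seqcount_t              seqcount;
        struct fixed_phy_status status;
};

/* Writer side, e.g. fixed_phy_update_state(): bump the sequence around the
 * update so readers can detect that they raced with it. */
static void fp_update_state(struct fixed_phy *fp,
                            const struct fixed_phy_status *new_status)
{
        write_seqcount_begin(&fp->seqcount);
        fp->status = *new_status;
        write_seqcount_end(&fp->seqcount);
}

/* Reader side, e.g. fixed_mdio_read(): take a snapshot and retry if it was
 * modified underneath us. */
static struct fixed_phy_status fp_read_state(struct fixed_phy *fp)
{
        struct fixed_phy_status state;
        unsigned int seq;

        do {
                seq = read_seqcount_begin(&fp->seqcount);
                state = fp->status;
        } while (read_seqcount_retry(&fp->seqcount, seq));

        return state;
}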
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Generate software phy registers as and when requested, rather than
duplicating the state in fixed_phy. This allows us to eliminate
the duplicate storage of the same data, which differs only
in format.
As fixed_phy_update_regs() no longer updates register state, rename
it to fixed_phy_update().
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Separate out the generation of MII registers from the state validation.
This allows us to simplify the error handling in fixed_phy() by allowing
earlier error detection.
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert the swphy register generation to tabular form which allows us
to eliminate multiple switch() statements. This results in smaller,
more efficient object code and makes it easier to add support for
faster speeds.
Before:
Idx Name Size VMA LMA File off Algn
0 .text 00000164 00000000 00000000 00000034 2**2
text data bss dec hex filename
388 0 0 388 184 swphy.o
After:
Idx Name Size VMA LMA File off Algn
0 .text 000000fc 00000000 00000000 00000034 2**2
5 .rodata 00000028 00000000 00000000 00000138 2**2
text data bss dec hex filename
324 0 0 324 144 swphy.o
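The shape of the tabular approach, for illustration (register values and table layout here are simplified, not the exact swphy tables):

struct swmii_regs {
        u16 bmcr;
        u16 lpa;
};

/* One entry per supported speed (full duplex assumed for brevity). */
static const struct swmii_regs speed_regs[] = {
        { .bmcr = BMCR_FULLDPLX,                   .lpa = LPA_10FULL  },
        { .bmcr = BMCR_SPEED100  | BMCR_FULLDPLX,  .lpa = LPA_100FULL },
        { .bmcr = BMCR_SPEED1000 | BMCR_FULLDPLX,  .lpa = 0           },
};

/* A single switch over the register number replaces the per-register
 * switch()es over speed and duplex. */
static int swphy_read_reg_sketch(unsigned int speed_index, int reg)
{
        switch (reg) {
        case MII_BMCR:
                return BMCR_ANENABLE | speed_regs[speed_index].bmcr;
        case MII_LPA:
                return LPA_LPACK | speed_regs[speed_index].lpa;
        case MII_BMSR:
                return BMSR_ANEGCAPABLE | BMSR_ANEGCOMPLETE | BMSR_LSTATUS;
        default:
                return 0;
        }
}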
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move the fixed_phy MII register generation to a library to allow other
software phy implementations to use this code.
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'linux-can-next-for-4.8-20160623' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
Marc Kleine-Budde says:
====================
pull-request: can-next 2016-06-17
this is a pull request of 4 patches for net-next/master.
Arnd Bergmann's patch fixes a regression in af_can introduced in
linux-can-next-for-4.8-20160617. There are two patches by Ramesh
Shanmugasundaram, which add CAN-2.0 support to the rcar_canfd driver.
And a patch by Ed Spiridonov that adds better error diagnostic messages
to his driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Replace calls to kmalloc followed by a memcpy with a direct call to
kmemdup.
The Coccinelle semantic patch used to make this change is as follows:
@@
expression from,to,size,flag;
statement S;
@@
- to = \(kmalloc\|kzalloc\)(size,flag);
+ to = kmemdup(from,size,flag);
if (to==NULL || ...) S
- memcpy(to, from, size);
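In plain C, the transformation reads as follows (GFP_KERNEL and the error return are shown for illustration):

/* before */
to = kmalloc(size, GFP_KERNEL);
if (!to)
        return -ENOMEM;
memcpy(to, from, size);

/* after */
to = kmemdup(from, size, GFP_KERNEL);
if (!to)
        return -ENOMEM;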
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
trivial fixes to spelling mistakes of the words "excessive collisions"
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
trivial fixes to spelling mistakes of the word "descriptors"
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Saeed Mahameed says:
====================
Mellanox 100G mlx5e Ethernet extensions
This series includes multiple feature extensions for the mlx5 Ethernet netdevice driver.
Namely, TX Rate limiting, RX interrupt moderation, ethtool settings.
TX Rate limiting:
- ConnectX-4 rate limiting infrastructure
- Set max rate NDO support
RX interrupt moderation:
- CQE based coalescing option (controlled via priv flags)
- Adaptive RX coalescing
ethtool settings:
- priv flags callbacks
- Support new ksettings API
- Add 50G missing link mode
- Support auto negotiation on/off
Applied on top: 0e9390ebf1 ("Merge branch 'mlxsw-next'")
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Prior to this patch, auto-negotiation was reported as off although it was
on by default in hardware. This patch reports the correct information to
ethtool and allows the user to toggle it on/off.
Another parameter was added to the set port proto function in order to
pass the auto-negotiation field to the hardware.
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use new get/set link ksettings and remove get/set settings legacy
callbacks.
This allows us to use bitmasks longer than 32 bit for supported and
advertised link modes and use modes that were previously not supported.
Signed-off-by: Gal Pressman <galp@mellanox.com>
CC: Ben Hutchings <bwh@kernel.org>
CC: David Decotigny <decot@googlers.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add MLX5E_50GBASE_SR2 as ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT.
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Cc: Ben Hutchings <bwh@kernel.org>
Cc: David Decotigny <decot@googlers.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT bit.
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Cc: Ben Hutchings <bwh@kernel.org>
Cc: David Decotigny <decot@googlers.com>
Acked-By: David Decotigny <decot@googlers.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a dedicated function to toggle the port link. It should be called only
after setting a port register.
Toggle will set the port link down and bring it back up if its
admin status was up.
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In this mode the moderation timer will restart upon
new completion (CQE) generation rather than upon interrupt
generation.
The outcome is that for bursty traffic the period timer will never
expire and thus only the moderation frame counter will dictate
interrupt generation, so the interrupt rate will be relative
to the size of the incoming packets.
If the burst ceases for "moderation period" time then an interrupt
will be issued immediately.
CQE based moderation is off by default and can be controlled
via ethtool set_priv_flags.
Performance tested on ConnectX4-Lx 50G.
Less packet loss in netperf UDP and TCP tests, with no bw degradation,
for both single and multi streams, with message sizes of
64, 1024, 1472 and 32768 bytes.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Gil Rockah <gilr@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce an infrastructure for getting/setting private net device
flags.
Currently a 'nop' priv flag is added; following patches will override
the flag with actual feature-specific flags.
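A sketch of the plumbing involved (callback and field names are illustrative; the ethtool constants are the standard ones):

static const char mlx5e_priv_flag_names[][ETH_GSTRING_LEN] = {
        "nop",
};

/* Reported via .get_sset_count(ETH_SS_PRIV_FLAGS) and .get_strings(). */

static u32 mlx5e_get_priv_flags(struct net_device *dev)
{
        struct mlx5e_priv *priv = netdev_priv(dev);

        return priv->pflags;                    /* bitmap, one bit per flag */
}

static int mlx5e_set_priv_flags(struct net_device *dev, u32 pflags)
{
        struct mlx5e_priv *priv = netdev_priv(dev);

        priv->pflags = pflags;                  /* 'nop': nothing to apply yet */
        return 0;
}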
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Implement set_maxrate ndo.
Use the rate index from the hardware table to attach to channel SQ/TXQ.
In case of a failure to configure a new rate, the queue remains with an
unlimited rate.
We save the configuration in the priv structure and apply it each time the
Send Queues are reinitialized (after open/close operations).
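For illustration, the hook is the standard ndo_set_tx_maxrate() op; the body below is a sketch (field names hypothetical), not the actual implementation:

static int mlx5e_set_tx_maxrate_sketch(struct net_device *dev, int index,
                                       u32 rate)
{
        struct mlx5e_priv *priv = netdev_priv(dev);

        /* Look up (or add) a HW rate-limit table entry for this rate and
         * attach its index to the channel's SQ.  Remember the rate in priv
         * so it can be reapplied when the queues are recreated on
         * open/close; on failure the queue simply stays unlimited. */
        priv->tx_rates[index] = rate;           /* rate in Mbit/s */
        return 0;
}

/* wired up via:  .ndo_set_tx_maxrate = mlx5e_set_tx_maxrate_sketch,  */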
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Configuring and managing HW rate limit tables.
The HW holds a table of rate limits; each rate is
associated with an index in that table.
Later, a Send Queue uses this index to set the rate limit.
Multiple Send Queues can have the same rate limit, which is
represented by a single entry in this table.
Even though a rate can be shared, each queue is rate
limited independently of the others.
The SW shadow of this table holds the rate itself,
the index in the HW table and the refcount (number of queues)
working with this rate.
The exported functions are mlx5_rl_add_rate and mlx5_rl_remove_rate.
The number of different rates and their values are derived
from HW capabilities.
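A sketch of how the exported API is intended to be used from the Ethernet driver (signatures are approximated from the description above):

static int example_limit_sq(struct mlx5_core_dev *mdev, u32 rate_mbps)
{
        u16 rl_index;
        int err;

        /* Takes a ref on an existing entry with this rate, or allocates a
         * new table entry, and returns its HW index. */
        err = mlx5_rl_add_rate(mdev, rate_mbps, &rl_index);
        if (err)
                return err;     /* table full or rate unsupported by HW */

        /* ... program rl_index into the Send Queue context ... */

        /* When the SQ stops using this rate, drop the table refcount. */
        mlx5_rl_remove_rate(mdev, rate_mbps);
        return 0;
}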
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sathya Perla says:
====================
be2net: patch set
Hi Dave, pls consider committing the following patches to the net-next tree.
Thanks!
Patch 1 replaces the be_max_eqs() macro with two new macros called
be_max_nic_eqs() and be_max_func_eqs() to clear confusion in that part
of the code.
Patch 2 adds support for configuring an asymmetric number of rx/tx queues
via the ethtool set-channels option.
Patch 3 disables EVB when VFs are not enabled on a BE3 SR-IOV config to
avoid the broadcast echo problem.
Patch 4 updates copyright markings in be2net src files
Patch 5 updates the be2net maintainers' list
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch removes Padmanabh's name from the maintainers list as he's no
longer with the company. It also adds the driver name on the headline to
make it easy to lookup the maintainers list by the driver name.
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch updates year and company name in the copyright markings in the
be2net source files.
Signed-off-by: Somnath Kotur <somnath.kotur@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On SR-IOV profiles, when the user connects a Linux Bridge or OVS to a BE3
vport, they suffer the "broadcast/multicast echo" problem. The BE3 EVB
echoes broadcast and multicast packets back to the PF's vport, confusing
the Linux bridge. BE3 relies on the src-mac addr being programmed on the
interface to avoid sending back an echo of a broadcast or multicast packet
on a vPort. When a Linux bridge is connected to a BE3, the mac-addr of the
VM behind the bridge doesn't get configured on the vPort and so echo
cancellation doesn't work.
This patch works around this problem by disabling the EVB initially
and re-enabling it *only* when SR-IOV is enabled by the user. For the
driver fix to work, the BE3 FW version must be >= 11.1.84.0.
Signed-off-by: Somnath Kotur <somnath.kotur@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
be2net so far supported creation of RX/TX queues only in pairs.
On configs where rx and tx queue counts are different, creation of only
the lesser number of queues has been supported.
This patch now allows a combination of RX/TX-only channels along with
combined channels. N TX-queues and M RX-queues can be created with the
following cmds:
ethtool -L ethX combined N rx M-N (when N < M)
ethtool -L ethX combined M tx N-M (when M < N)
Setting both RX-only and TX-only channels is still not supported.
It is mandatory to create at least one combined channel.
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The EQs available on a function are shared between NIC and RoCE.
The be_max_eqs() macro was so far being used to refer to the max number of
EQs available for NIC. This has caused some confusion in the code. To fix
this confusion, this patch introduces a new macro called be_max_nic_eqs()
to refer to the max number of EQs available for NIC only and renames
be_max_eqs() to be_max_func_eqs().
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Duan says:
====================
net: fec: add new type device
Different i.MX SoC FECs support different features, e.g.:
- i.MX6Q/DL FEC does not support AVB and interrupt coalescing
- i.MX6SX/i.MX7D supports AVB and interrupt coalescing
- i.MX6UL/ULL does not support AVB, but supports interrupt coalescing
So add a new quirk flag to indicate the supported features, and add a new
device type for i.MX6UL.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
i.MX6UL is a member of the i.MX series family; its SoC FEC inherits from
i.MX6SX but removes some IP features, so define a new type for the fec
device.
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>