We want to avoid forward declarations of functions if possible.
Rearrange the igc_irq_disable function implementation.
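As a rough illustration of the pattern (hypothetical helper names, not the
actual igc code), defining a function above its first caller makes the
forward declaration unnecessary:

/* Before: a prototype is needed because the definition comes later. */
static void igc_example_helper(struct igc_adapter *adapter);

static void igc_example_caller(struct igc_adapter *adapter)
{
        igc_example_helper(adapter);
}

static void igc_example_helper(struct igc_adapter *adapter)
{
        /* ... */
}

/* After: move the definition of igc_example_helper() above
 * igc_example_caller() and delete the prototype.
 */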
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We want to avoid forward declarations of functions if possible.
Rearrange the igc_irq_enable function implementation.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We want to avoid forward declarations of functions if possible.
Rearrange the igc_configure_msix function implementation.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We want to avoid forward declarations of functions if possible.
Rearrange the igc_set_rx_mode function implementation.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We want to avoid forward declarations of functions if possible.
Rearrange the igc_set_interrupt_capability function implementation.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We want to avoid forward declarations of functions if possible.
Rearrange the igc_alloc_mapped_page function implementation.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We want to avoid forward declarations of functions if possible.
Rearrange the igc_configure function implementation.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We want to avoid forward declarations of functions if possible.
Rearrange the igc_set_default_mac_filter function implementation.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We want to avoid forward declarations of functions if possible.
Rearrange the igc_power_down_link function implementation.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We want to avoid forward declarations of functions if possible.
Rearrange the igc_clean_tx_ring function implementation.
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jeff Kirsher says:
====================
100GbE Intel Wired LAN Driver Updates 2020-01-03
This series contains updates to the ice driver only.
Brett adds support for UDP segmentation offload (USO) based on the work
Alex Duyck did for other Intel drivers. Refactors how the VF sets
spoof checking to resolve a couple of issues found in
ice_set_vf_spoofchk(). Adds the ability to track the dflt_vsi
(default VSI), since we cannot have more than one default VSI. Adds a
macro for the "for loop" used repeatedly throughout the code. Cleans
up and makes the VF link flows all similar. Refactors the flows for
adding and deleting MAC addresses in order to simplify the logic for
error conditions and setting/clearing the VF's default MAC address field.
Michal moves the setting of the default ITR value from ice_cfg_itr() to
the function where we allocate queue vectors. Adds support for saving and
restoring the ITR value for each queue. Adds a check for invalid or
unused parameters so the driver logs the information and returns an error.
Vignesh cleans up the driver where we were trying to write to read-only
registers for the receive flex descriptors.
Tony changes a netdev_info() to netdev_dbg() when the MTU value is
changed.
Bruce suppresses a coverity reported error that was not really an error
by adding a code comment.
Mitch adds a check for a NULL receive descriptor to resolve a coverity
reported issue.
Krzysztof prevents a potential general protection fault by adding a
boundary check to see if the queue id is greater than the size of a UMEM
array. Adds additional code comments to assist coverity in its scans to
prevent false positives.
Jake adds support for E822 devices to the driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for E822 devices
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Coverity reports some of the calls to xdp_rxq_info_reg() as potential
issues, because the driver does not check its return value. However,
those calls are wrapped with "if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))"
and this check alone is enough to be sure that the function will never
fail.
All possible states of xdp_rxq_info are:
- NEW,
- REGISTERED,
- UNREGISTERED,
- UNUSED.
The driver will never mark a queue as UNUSED, so the
return value can be safely ignored.
Add comments for Coverity right above calls to xdp_rxq_info_reg() to
suppress the warnings.
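A minimal sketch of the annotated pattern (the comment wording and the
surrounding arguments are illustrative, not the verbatim ice code):

if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))
        /* coverity[check_return]
         * The driver never marks a queue as UNUSED, so this call
         * cannot fail and its return value can safely be ignored.
         */
        xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, ring->q_index);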
Signed-off-by: Krzysztof Kazimierczak <krzysztof.kazimierczak@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In ice_xsk_umem(), the variable qid, which is later used as an array
index, is not validated against the array bounds. Because of that,
a calling function might receive an invalid address, which causes a
general protection fault when dereferenced.
To address this, add a boundary check to see if qid is greater than the
size of a UMEM array. Also, don't let user change vsi->num_xsk_umems
just by trying to setup a second UMEM if its value is already set up
(i.e. UMEM region has already been allocated for this VSI).
While at it, make sure that ring->zca.free pointer is always zeroed out
if there is no UMEM on a specified ring.
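A simplified sketch of the guarded lookup (field names are taken from the
description above and may not match the driver exactly):

static inline struct xdp_umem *ice_xsk_umem(struct ice_ring *ring)
{
        struct xdp_umem **umems = ring->vsi->xsk_umems;
        u16 qid = ring->q_index;

        /* bail out if the UMEM array is absent or qid is out of bounds */
        if (!umems || qid >= ring->vsi->num_xsk_umems || !umems[qid])
                return NULL;

        return umems[qid];
}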
Signed-off-by: Krzysztof Kazimierczak <krzysztof.kazimierczak@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In the case where the hardware gives us a null Rx descriptor, it is
theoretically possible that we could call one of our skb-construction
functions with no data pointer, which would cause a panic.
In real life, this will never happen - we only get null RX
descriptors as the final descriptor in a chain of otherwise-valid
descriptors. When this happens, the skb will be extant and we'll just
call ice_add_rx_frag(), which can deal with empty data buffers.
Unfortunately, Coverity does not have intimate knowledge of our
hardware, so we must add a check here.
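A hedged sketch of the kind of guard this adds in the Rx clean-up path
(structure and names are illustrative, not the exact diff):

rx_buf = ice_get_rx_buf(rx_ring, &skb, size);

if (skb)
        /* existing skb from a previous descriptor; empty data is fine */
        ice_add_rx_frag(rx_ring, rx_buf, skb, size);
else if (rx_buf)
        skb = ice_construct_skb(rx_ring, rx_buf, &xdp);
/* no skb and no buffer: null descriptor, nothing to construct */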
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Coverity reports an error that is not really an error; suppress it.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Following the changes of commit 12299132b3 ("net: ethernet: intel: Demote
MTU change prints to debug"), change the MTU change message to netdev_dbg().
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Currently when there are SR-IOV VF(s) and the user does "ip link show <pf
interface>" the VF unicast MAC addresses all show 00:00:00:00:00:00
if the unicast MAC was set via VIRTCHNL (i.e. not administratively set
by the host PF).
This is misleading to the host administrator. Fix this by setting the
VF's dflt_lan_addr.addr when the VF's unicast MAC address is
configured via VIRTCHNL. There are a couple of cases where we don't allow
the dflt_lan_addr.addr field to be written. First, if the VF's
pf_set_mac field is true and the VF is not trusted, then we don't allow
the dflt_lan_addr.addr to be modified. Second, if the
dflt_lan_addr.addr has already been set (i.e. via VIRTCHNL), it is not
overwritten.
Also, a small refactor was done to separate the flows for adding and
deleting MAC addresses in order to simplify the logic for error
conditions and set/clear the VF's dflt_lan_addr.addr field.
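Conceptually, the gating around the dflt_lan_addr.addr update looks
something like this (the helper and the exact structure are assumptions
based on the description, not the real diff):

/* illustrative: record the MAC configured via VIRTCHNL as the VF's
 * default address shown by "ip link show"
 */
if (!is_zero_ether_addr(vf->dflt_lan_addr.addr))
        return 0;       /* already set, e.g. via an earlier VIRTCHNL request */

if (vf->pf_set_mac && !ice_is_vf_trusted(vf))
        return 0;       /* administratively set by the host, VF not trusted */

ether_addr_copy(vf->dflt_lan_addr.addr, mac_addr);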
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Currently the flow for ice_set_vf_link_state() is not configuring link
the same as all other VF link configuration flows. Fix this by only
setting the necessary VF members in ice_set_vf_link_state() and then
call ice_vc_notify_link_state() to actually configure link for the
VF. This made ice_set_pfe_link_forced() unnecessary, so it was
deleted. Also, this commonizes the VF link flows so that they all call
ice_vc_notify_link_state().
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Remove Rx flex descriptor metadata and flag programming; per specification
these registers cannot be written to as they are read only.
Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Check all unused parameters; if ethtool sent one of them,
print info about that and return an error.
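Assuming this refers to the coalesce (ethtool -C) parameters, given the
surrounding ITR work, the added check amounts to rejecting any field the
driver does not implement (only a few fields are shown; the exact list
and message text are illustrative):

if (ec->rx_coalesce_usecs_irq || ec->rx_max_coalesced_frames ||
    ec->tx_coalesce_usecs_irq || ec->tx_max_coalesced_frames ||
    ec->stats_block_coalesce_usecs || ec->pkt_rate_low) {
        netdev_info(netdev, "setting unsupported coalesce parameter\n");
        return -EINVAL;
}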
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
After each rebuild the driver deallocates q_vectors, so the interrupt
throttle rate (ITR) settings get lost.
Create a function to save and restore ITR for each queue. If a user
increases the number of queues, restore all the previous queue
settings for each existing queue, and the additional queues will
get the default setting.
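A rough sketch of the save/restore idea (the struct and field names are
assumptions based on the description, not the exact patch):

struct ice_coalesce_stored {
        u16 itr_tx;
        u16 itr_rx;
};

/* before freeing q_vectors for the rebuild: remember each setting */
for (i = 0; i < vsi->num_q_vectors; i++) {
        coalesce[i].itr_tx = vsi->q_vectors[i]->tx.itr_setting;
        coalesce[i].itr_rx = vsi->q_vectors[i]->rx.itr_setting;
}

/* after reallocating q_vectors: restore what was saved; queues added
 * beyond the previous count keep the default setting
 */
for (i = 0; i < vsi->num_q_vectors && i < prev_num_q_vectors; i++) {
        vsi->q_vectors[i]->tx.itr_setting = coalesce[i].itr_tx;
        vsi->q_vectors[i]->rx.itr_setting = coalesce[i].itr_rx;
}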
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
When the user sets itr_setting to zero from ethtool -C, the driver changes
this value to the default in ice_cfg_itr() (for example after changing ring
params). Remove the code that sets the default value in ice_cfg_itr() and
move it to the place where the driver allocates q_vectors.
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Currently we do "for (i = 0; i < pf->num_alloc_vfs; i++)" all over the
place. Many other places use macros to contain this repeated for loop,
so create the macro ice_for_each_vf(pf, i) that does the same thing.
There were a couple of places where we were using both a loop variable
and a VF iterator; these were changed to use a local variable within
the ice_for_each_vf() macro.
Also, in ice_alloc_vfs() we were setting pf->num_alloc_vfs after doing
"for (i = 0; i < num_alloc_vfs; i++)". Instead, assign pf->num_alloc_vfs
right after allocating memory for the pf->vf array.
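Based on that description, the macro amounts to something like:

#define ice_for_each_vf(pf, i) \
        for ((i) = 0; (i) < (pf)->num_alloc_vfs; (i)++)

/* usage */
ice_for_each_vf(pf, i) {
        struct ice_vf *vf = &pf->vf[i];

        /* per-VF work */
}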
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We can't have more than one default VSI so prevent another VSI from
overwriting the current dflt_vsi. This was achieved by adding the
following functions:
ice_is_dflt_vsi_in_use()
- Used to check if the default VSI is already being used.
ice_is_vsi_dflt_vsi()
- Used to check if VSI passed in is in fact the default VSI.
ice_set_dflt_vsi()
- Used to set the default VSI via a switch rule.
ice_clear_dflt_vsi()
- Used to clear the default VSI via a switch rule.
Also, there was no need to introduce any locking because all mailbox
events and synchronization of switch filters for the PF happen in the
service task.
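Illustrative signatures and a typical caller, following the list above
(the ice_sw/first_sw plumbing is an assumption):

bool ice_is_dflt_vsi_in_use(struct ice_sw *sw);
bool ice_is_vsi_dflt_vsi(struct ice_sw *sw, struct ice_vsi *vsi);
int ice_set_dflt_vsi(struct ice_sw *sw, struct ice_vsi *vsi);
int ice_clear_dflt_vsi(struct ice_sw *sw);

/* typical caller, e.g. when enabling promiscuous mode on a VSI */
if (!ice_is_dflt_vsi_in_use(pf->first_sw))
        err = ice_set_dflt_vsi(pf->first_sw, vsi);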
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
There are many things wrong with the function
ice_set_vf_spoofchk().
1. The VSI being modified is the PF VSI, not the VF VSI.
2. We are enabling Rx VLAN pruning instead of Tx VLAN anti-spoof.
3. The spoofchk setting for each VF is not initialized correctly
or re-initialized correctly on reset.
To fix [1] we need to make sure we are modifying the VF VSI.
This is done by using the vf->lan_vsi_idx to index into the PF's
VSI array.
To fix [2] replace setting Rx VLAN pruning in ice_set_vf_spoofchk()
with setting Tx VLAN anti-spoof.
To fix [3] we need to make sure the initial VSI settings match what
is done in ice_set_vf_spoofchk() for spoofchk=on, and make sure this
also holds across VF reset. This was done by modifying ice_vsi_init()
to account for the current spoofchk state of the VF VSI.
Because of these changes, Tx VLAN anti-spoof needs to be removed
from ice_cfg_vlan_pruning(). This is okay for the VF because this
is now controlled by the admin enabling/disabling spoofchk. For the
PF, Tx VLAN anti-spoof should not be set. This change requires us to
call ice_set_vf_spoofchk() when configuring promiscuous mode for
the VF, which requires moving ice_set_vf_spoofchk() to avoid a
forward declaration.
Also, add VLAN 0 by default when allocating a VF since the PF is unaware
if the guest OS is running the 8021q module. Without this, MDD events will
trigger on untagged traffic because spoofcheck is enabled by default. Due
to this change, ignore add/delete messages for VLAN 0 from VIRTCHNL since
this is added/deleted during VF initialization/teardown respectively and
should not be modified.
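A heavily simplified sketch of the corrected indexing and state handling
(the VSI security-context programming itself is omitted and the helper
below is hypothetical):

struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_pf *pf = np->vsi->back;
struct ice_vf *vf = &pf->vf[vf_id];
struct ice_vsi *vf_vsi = pf->vsi[vf->lan_vsi_idx];  /* VF VSI, not the PF VSI */

if (vf->spoofchk == ena)
        return 0;               /* nothing to change */

/* program Tx VLAN anti-spoof in the VF VSI security context instead
 * of toggling Rx VLAN pruning, then cache the state for reset
 */
err = ice_vsi_apply_spoofchk(vf_vsi, ena);      /* hypothetical helper */
if (!err)
        vf->spoofchk = ena;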
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Based on the work done by Alex Duyck on other Intel drivers, add code to
support UDP segmentation offload (USO) for the ice driver.
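A rough sketch of the two user-visible pieces, advertising the offload
and distinguishing the UDP case when building the segmentation context
(variable names are illustrative):

netdev->features |= NETIF_F_GSO_UDP_L4;
netdev->hw_features |= NETIF_F_GSO_UDP_L4;

/* in the TSO path, the L4 header length for USO is the fixed UDP
 * header size rather than tcp->doff
 */
if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
        header_len = l4_start + sizeof(struct udphdr);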
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
drivers/net/ethernet/brocade/bna/bfa_ioc.c: In function
‘bfa_ioc_fwver_clear’:
drivers/net/ethernet/brocade/bna/bfa_ioc.c:1127:13: warning: variable
‘pgoff’ set but not used [-Wunused-but-set-variable]
It is never used, and so can be removed.
Signed-off-by: yu kuai <yukuai3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The current driver only exists on non-NUMA-aware machines.
With 44768decb7 ("page_pool: handle page recycle for NUMA_NO_NODE condition")
applied we can safely change that to NUMA_NO_NODE and accommodate future
NUMA-aware hardware using the netsec network interface.
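The change boils down to the nid member of the page pool parameters; the
surrounding fields are shown only for context and are approximate:

struct page_pool_params pp_params = {
        .order          = 0,
        .flags          = PP_FLAG_DMA_MAP,
        .pool_size      = DESC_NUM,
        .nid            = NUMA_NO_NODE, /* previously pinned to one node */
        .dev            = priv->dev,
        .dma_dir        = DMA_FROM_DEVICE,
};

dring->page_pool = page_pool_create(&pp_params);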
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Passing NULL to l2tp_pernet causes a crash via BUG_ON.
Dereferencing net in net_generic() also has the same effect.
This patch removes the redundant BUG_ON check on the same parameter.
Signed-off-by: Xu Wang <vulab@iscas.ac.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Passing NULL to phonet_pernet causes a crash via BUG_ON.
Dereferencing net in net_generic() also has the same effect.
This patch removes the redundant BUG_ON check on the same parameter.
Signed-off-by: Xu Wang <vulab@iscas.ac.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
The argument is always ignored, so remove it.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes gcc '-Wunused-but-set-variable' warning:
net/ethtool/linkmodes.c: In function 'ethnl_set_linkmodes':
net/ethtool/linkmodes.c:326:32: warning:
variable 'lsettings' set but not used [-Wunused-but-set-variable]
struct ethtool_link_settings *lsettings;
^
It is never used, so remove it.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Michal Kubecek <mkubecek@suse.cz>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
REXMIT_NEW is a macro for "FRTO-style
transmit of unsent/new packets"; this patch
makes it more readable.
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ENETC implements the time specific departure capability, which enables
the user to specify when a frame can be transmitted. When this
capability is enabled, the device will delay the transmission of
the frame so that it is transmitted at the precisely specified time.
The departure time can be up to 0.5 seconds in the future. If the
departure time in the transmit BD has not yet been reached, based
on the current time, the packet will not be transmitted.
The feature is used through the ETF QoS qdisc, which the user can set
up with tc commands. Here are example commands:
tc qdisc add dev eth0 root handle 1: mqprio \
num_tc 8 map 0 1 2 3 4 5 6 7 hw 1
tc qdisc replace dev eth0 parent 1:8 etf \
clockid CLOCK_TAI delta 30000 offload
These example commands set the queue mapping first and then configure
queue 7 with a dequeue time 30us ahead. Test frames sent by the user
should then set the SO_TXTIME option on the socket.
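A minimal userspace sketch of that last step (headers, error handling and
the msghdr/cmsg buffer setup are omitted; the 100us offset is arbitrary):

/* needs <linux/net_tstamp.h>, <sys/socket.h>, <time.h>, <string.h> */
struct sock_txtime so_txtime = { .clockid = CLOCK_TAI, .flags = 0 };
struct timespec ts;
__u64 txtime;

setsockopt(fd, SOL_SOCKET, SO_TXTIME, &so_txtime, sizeof(so_txtime));

clock_gettime(CLOCK_TAI, &ts);
txtime = ts.tv_sec * 1000000000ULL + ts.tv_nsec + 100000;       /* 100us ahead */

/* attach the absolute departure time to the packet as a cmsg */
cmsg = CMSG_FIRSTHDR(&msg);
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_TXTIME;
cmsg->cmsg_len = CMSG_LEN(sizeof(txtime));
memcpy(CMSG_DATA(cmsg), &txtime, sizeof(txtime));
sendmsg(fd, &msg, 0);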
There are also some limitations for this feature in hardware:
- Transmit checksum offloads and time specific departure operation
are mutually exclusive.
- Time Aware Shaper feature (Qbv) offload and time specific departure
operation are mutually exclusive.
Signed-off-by: Po Liu <Po.Liu@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use resource_size rather than a verbose computation on
the end and start fields.
The semantic patch that makes these changes is as follows:
(http://coccinelle.lip6.fr/)
<smpl>
@@ struct resource ptr; @@
- (ptr.end + 1 - ptr.start)
+ resource_size(&ptr)
@@ struct resource *ptr; @@
- (ptr->end + 1 - ptr->start)
+ resource_size(ptr)
</smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
The idtcm_caps structure is only copied into another structure,
so make it const.
The opportunity for this change was found using Coccinelle.
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Only the SFC4000 code, now moved to sfc-falcon, needed I2C.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern says:
====================
tcp: Add support for L3 domains to MD5 auth
With VRF, the scope of network addresses is limited to the L3 domain
the device is associated with. MD5 keys are based on addresses, so proper
VRF support requires an L3 domain to be considered for the lookups.
Leverage the new TCP_MD5SIG_EXT option to add support for a device index
to MD5 keys. The __tcpm_pad entry in tcp_md5sig is renamed to tcpm_ifindex
and a new flag, TCP_MD5SIG_FLAG_IFINDEX, in tcpm_flags determines if the
entry is examined. This follows what was done for MD5 and prefixes with
commits
8917a777be ("tcp: md5: add TCP_MD5SIG_EXT socket option to set a key address prefix")
6797318e62 ("tcp: md5: add an address prefix for key lookup")
Handling both a device AND L3 domain is much more complicated for the
response paths. This set focuses only on L3 support - requiring the
device index to be an l3mdev (i.e., VRF). Support for slave devices can
be added later if desired, much like the progression of support for
sockets bound to a VRF and then bound to a device in a VRF. Kernel
code is setup to explicitly call out that current lookup is for an L3
index, while the uapi just references a device index allowing its
meaning to include other devices in the future.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add tests for new TCP MD5 API for L3 domains (VRF).
A new namespace is added to create a duplicate configuration between
the VRF and default VRF to verify overlapping config is handled properly.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add tests for existing TCP MD5 APIs - both single address
config and the new extended API for prefixes.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Update nettest to implement TCP_MD5SIG_EXT for a prefix and a device.
Add a new option, -m, to specify a prefix and length to use with MD5
auth. The device option comes from the existing -d option. If either
are set and MD5 auth is requested, TCP_MD5SIG_EXT is used instead of
TCP_MD5SIG.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On failure to set MD5 password, do_server should return 1 so that the
program exits with 1 rather than 255. This is used for negative testing
when adding MD5 with the device option.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for userspace to specify a device index to limit the scope
of an entry via the TCP_MD5SIG_EXT setsockopt. The existing __tcpm_pad
is renamed to tcpm_ifindex and the new field is only checked if the new
TCP_MD5SIG_FLAG_IFINDEX is set in tcpm_flags. For now, the device index
must point to an L3 master device (e.g., VRF). The API and error
handling are setup to allow the constraint to be relaxed in the future
to any device index.
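A userspace sketch of the resulting API (the socket, peer sockaddr, key
and VRF device name are placeholders):

struct tcp_md5sig md5 = { };

memcpy(&md5.tcpm_addr, &peer, sizeof(peer));    /* sockaddr of the peer */
md5.tcpm_flags = TCP_MD5SIG_FLAG_IFINDEX;
md5.tcpm_ifindex = if_nametoindex("red");       /* must be an L3 master (VRF) */
md5.tcpm_keylen = strlen(password);
memcpy(md5.tcpm_key, password, md5.tcpm_keylen);

setsockopt(sd, IPPROTO_TCP, TCP_MD5SIG_EXT, &md5, sizeof(md5));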
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add l3index to tcp_md5sig_key to represent the L3 domain of a key, and
add l3index to tcp_md5_do_add and tcp_md5_do_del to fill in the key.
With the key now based on an l3index, add the new parameter to the
lookup functions and consider the l3index when looking for a match.
The l3index comes from the skb when processing ingress packets leveraging
the helpers created for socket lookups, tcp_v4_sdif and inet_iif (and the
v6 variants). When the sdif index is set it means the packet ingressed a
device that is part of an L3 domain and inet_iif points to the VRF device.
For egress, the L3 domain is determined from the socket binding and
sk_bound_dev_if.
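Sketched for the IPv4 ingress side (the lookup signature reflects the new
l3index parameter described above; surrounding code is illustrative):

/* in the inbound MD5 check: map the skb to an L3 domain index */
int sdif = tcp_v4_sdif(skb);
int dif = inet_iif(skb);
int l3index = sdif ? dif : 0;   /* VRF ifindex, or 0 for no L3 domain */

key = tcp_md5_do_lookup(sk, l3index, addr, AF_INET);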
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The original ingress device index is saved to the cb space of the skb
and the cb is moved during tcp processing. Since tcp_v4_inbound_md5_hash
can be called before and after the cb move, pass dif and sdif to it so
the caller can save both prior to the cb move. Both are used by a later
patch.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The original ingress device index is saved to the cb space of the skb
and the cb is moved during tcp processing. Since tcp_v6_inbound_md5_hash
can be called before and after the cb move, pass dif and sdif to it so
the caller can save both prior to the cb move. Both are used by a later
patch.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Extract the typecast to (union tcp_md5_addr *) to a local variable
rather than the current long, inline declaration with function calls.
No functional change intended.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel says:
====================
mlxsw: Allow setting default port priority
Petr says:
When LLDP APP TLV selector 1 (EtherType) is used with PID of 0, the
corresponding entry specifies "default application priority [...] when
application priority is not otherwise specified."
mlxsw currently supports this type of APP entry, but uses it only as a
fallback for unspecified DSCP rules. However non-IP traffic is prioritized
according to port-default priority, not according to the DSCP-to-prio
tables, and thus it's currently not possible to prioritize such traffic
correctly.
This patchset extends the use of the abovementioned APP entry to also set
default port priority (in patches #1 and #2) and then (in patch #3) adds a
selftest.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>