/* SPDX-License-Identifier: GPL-2.0-or-later */
net: Distributed Switch Architecture protocol support
Distributed Switch Architecture is a protocol for managing hardware
switch chips. It consists of a set of MII management registers and
commands to configure the switch, and an ethernet header format to
signal which of the ports of the switch a packet was received from
or is intended to be sent to.
The switches that this driver supports are typically embedded in
access points and routers, and a typical setup with a DSA switch
looks something like this:
        +-----------+       +-----------+
        |           | RGMII |           |
        |           +-------+           +------ 1000baseT MDI ("WAN")
        |           |       |  6-port   +------ 1000baseT MDI ("LAN1")
        |    CPU    |       |  ethernet +------ 1000baseT MDI ("LAN2")
        |           |MIImgmt|  switch   +------ 1000baseT MDI ("LAN3")
        |           +-------+  w/5 PHYs +------ 1000baseT MDI ("LAN4")
        |           |       |           |
        +-----------+       +-----------+
The switch driver presents each port on the switch as a separate
network interface to Linux, polls the switch to maintain software
link state of those ports, forwards MII management interface
accesses to those network interfaces (e.g. as done by ethtool) to
the switch, and exposes the switch's hardware statistics counters
via the appropriate Linux kernel interfaces.
This initial patch supports the MII management interface register
layout of the Marvell 88E6123, 88E6161 and 88E6165 switch chips, and
supports the "Ethertype DSA" packet tagging format.
(There is no officially registered ethertype for the Ethertype DSA
packet format, so we just grab a random one. The ethertype to use
is programmed into the switch, and the switch driver uses the value
of ETH_P_EDSA for this, so this define can be changed at any time in
the future if the one we chose is allocated to another protocol or
if Ethertype DSA gets its own officially registered ethertype, and
everything will continue to work.)
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Tested-by: Nicolas Pitre <nico@marvell.com>
Tested-by: Byron Bradley <byron.bbradley@gmail.com>
Tested-by: Tim Ellis <tim.ellis@mac.com>
Tested-by: Peter van Valderen <linux@ddcrew.com>
Tested-by: Dirk Teurlings <dirk@upexia.nl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-10-07 20:44:02 +07:00

/*
 * include/net/dsa.h - Driver for Distributed Switch Architecture switch chips
dsa: add switch chip cascading support
The initial version of the DSA driver only supported a single switch
chip per network interface, while DSA-capable switch chips can be
interconnected to form a tree of switch chips. This patch adds support
for multiple switch chips on a network interface.
An example topology for a 16-port device with an embedded CPU is as
follows:
        +-----+          +--------+       +--------+
        |     |eth0    10| switch |9    10| switch |
        | CPU +----------+        +-------+        |
        |     |          | chip 0 |       | chip 1 |
        +-----+          +---++---+       +---++---+
                             ||               ||
                             ||               ||
                             ||1000baseT      ||1000baseT
                             ||ports 1-8      ||ports 9-16
This requires a couple of interdependent changes in the DSA layer:
- The dsa platform driver data needs to be extended: there is still
only one netdevice per DSA driver instance (eth0 in the example
above), but each of the switch chips in the tree needs its own
mii_bus device pointer, MII management bus address, and port name
array. (include/net/dsa.h) The existing in-tree dsa users need
some small changes to deal with this. (arch/arm)
- The DSA and Ethertype DSA tagging modules need to be extended to
use the DSA device ID field on receive and demultiplex the packet
accordingly, and fill in the DSA device ID field on transmit
according to which switch chip the packet is heading to.
(net/dsa/tag_{dsa,edsa}.c)
- The concept of "CPU port", which is the switch chip port that the
CPU is connected to (port 10 on switch chip 0 in the example), needs
to be extended with the concept of "upstream port", which is the
port on the switch chip that will bring us one hop closer to the CPU
(port 10 for both switch chips in the example above).
- The dsa platform data needs to specify which ports on which switch
chips are links to other switch chips, so that we can enable DSA
tagging mode on them. (For inter-switch links, we always use
non-EtherType DSA tagging, since it has lower overhead. The CPU
link uses dsa or edsa tagging depending on what the 'root' switch
chip supports.) This is done by specifying "dsa" for the given
port in the port array.
- The dsa platform data needs to be extended with information on which
port to use to reach any given switch chip from any given switch chip.
This info is specified via the per-switch chip data struct ->rtable[]
array, which gives the nexthop ports for each of the other switches
in the tree.
For the example topology above, the dsa platform data would look
something like this:
static struct dsa_chip_data sw[2] = {
	{
		.mii_bus	= &foo,
		.sw_addr	= 1,
		.port_names[0]	= "p1",
		.port_names[1]	= "p2",
		.port_names[2]	= "p3",
		.port_names[3]	= "p4",
		.port_names[4]	= "p5",
		.port_names[5]	= "p6",
		.port_names[6]	= "p7",
		.port_names[7]	= "p8",
		.port_names[9]	= "dsa",
		.port_names[10]	= "cpu",
		.rtable		= (s8 []){ -1, 9, },
	}, {
		.mii_bus	= &foo,
		.sw_addr	= 2,
		.port_names[0]	= "p9",
		.port_names[1]	= "p10",
		.port_names[2]	= "p11",
		.port_names[3]	= "p12",
		.port_names[4]	= "p13",
		.port_names[5]	= "p14",
		.port_names[6]	= "p15",
		.port_names[7]	= "p16",
		.port_names[10]	= "dsa",
		.rtable		= (s8 []){ 10, -1, },
	},
};

static struct dsa_platform_data pd = {
	.netdev		= &foo,
	.nr_switches	= 2,
	.sw		= sw,
};
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Tested-by: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-20 16:52:09 +07:00
 * Copyright (c) 2008-2009 Marvell Semiconductor
*/

#ifndef __LINUX_NET_DSA_H
#define __LINUX_NET_DSA_H

#include <linux/if.h>
#include <linux/if_ether.h>
#include <linux/list.h>
#include <linux/notifier.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/of.h>
#include <linux/ethtool.h>
#include <linux/net_tstamp.h>
#include <linux/phy.h>
#include <linux/platform_data/dsa.h>
#include <linux/phylink.h>
#include <net/devlink.h>
#include <net/switchdev.h>

struct tc_action;
struct phy_device;
struct fixed_phy_status;
struct phylink_link_state;

#define DSA_TAG_PROTO_NONE_VALUE		0
#define DSA_TAG_PROTO_BRCM_VALUE		1
#define DSA_TAG_PROTO_BRCM_PREPEND_VALUE	2
#define DSA_TAG_PROTO_DSA_VALUE			3
#define DSA_TAG_PROTO_EDSA_VALUE		4
#define DSA_TAG_PROTO_GSWIP_VALUE		5
#define DSA_TAG_PROTO_KSZ9477_VALUE		6
#define DSA_TAG_PROTO_KSZ9893_VALUE		7
#define DSA_TAG_PROTO_LAN9303_VALUE		8
#define DSA_TAG_PROTO_MTK_VALUE			9
#define DSA_TAG_PROTO_QCA_VALUE			10
#define DSA_TAG_PROTO_TRAILER_VALUE		11
net: dsa: Optional VLAN-based port separation for switches without tagging
This patch provides generic DSA code for using VLAN (802.1Q) tags for
the same purpose as a dedicated switch tag for injection/extraction.
It is based on the discussions and interest that has been so far
expressed in https://www.spinics.net/lists/netdev/msg556125.html.
Unlike all other DSA-supported tagging protocols, CONFIG_NET_DSA_TAG_8021Q
does not offer a complete solution for drivers (nor can it). Instead, it
provides generic code that drivers can opt into calling:
- dsa_8021q_xmit: Inserts a VLAN header with the specified contents.
Can be called from another tagging protocol's xmit function.
Currently the LAN9303 driver is inserting headers that are simply
802.1Q with custom fields, so this is an opportunity for code reuse.
- dsa_8021q_rcv: Retrieves the TPID and TCI from a VLAN-tagged skb.
Removing the VLAN header is left as a decision for the caller to make.
- dsa_port_setup_8021q_tagging: For each user port, installs an Rx VID
and a Tx VID, for proper untagged traffic identification on ingress
and steering on egress. Also sets up the VLAN trunk on the upstream
(CPU or DSA) port. Drivers are intentionally left to call this
function explicitly, depending on the context and hardware support.
The expected switch behavior and VLAN semantics should not be violated
under any conditions. That is, after calling
dsa_port_setup_8021q_tagging, the hardware should still pass all
ingress traffic, be it tagged or untagged.
For uniformity with the other tagging protocols, a module for the
dsa_8021q_netdev_ops structure is registered, but the typical usage is
to set up another tagging protocol which selects CONFIG_NET_DSA_TAG_8021Q,
and calls the API from tag_8021q.h. Null function definitions are also
provided so that a "depends on" is not forced in the Kconfig.
This tagging protocol only works when switch ports are standalone, or
when they are added to a VLAN-unaware bridge. It will probably remain
this way for the reasons below.
When added to a bridge that has vlan_filtering 1, the bridge core will
install its own VLANs and reset the pvids through switchdev. For the
bridge core, switchdev is a write-only pipe. All VLAN-related state is
kept in the bridge core and nothing is read from DSA/switchdev or from
the driver. So the bridge core will break this port separation because
it will install the vlan_default_pvid into all switchdev ports.
Even if we could teach the bridge driver about switchdev preference of a
certain vlan_default_pvid (task difficult in itself since the current
setting is per-bridge but we would need it per-port), there would still
exist many other challenges.
Firstly, in the DSA rcv callback, a driver would have to perform an
iterative reverse lookup to find the correct switch port. That is
because the port is a bridge slave, so its Rx VID (port PVID) is subject
to user configuration. How would we ensure that the user doesn't reset
the pvid to a different value (which would make an O(1) translation
impossible), or to a non-unique value within this DSA switch tree (which
would make any translation impossible)?
Finally, not all switch ports are equal in DSA, and that makes it
difficult for the bridge to be completely aware of this anyway.
The CPU port needs to transmit tagged packets (VLAN trunk) in order for
the DSA rcv code to be able to decode source information.
But the bridge code has absolutely no idea which switch port is the CPU
port, if nothing else then just because there is no netdevice registered
by DSA for the CPU port.
Also DSA does not currently allow the user to specify that they want the
CPU port to do VLAN trunking anyway. VLANs are added to the CPU port
using the same flags as they were added on the user port.
So the VLANs installed by dsa_port_setup_8021q_tagging per driver
request should remain private from the bridge's and user's perspective,
and should not alter the VLAN semantics observed by the user.
In the current implementation a VLAN range ending at 4095 (VLAN_N_VID)
is reserved for this purpose. Each port receives a unique Rx VLAN and a
unique Tx VLAN. Separate VLANs are needed for Rx and Tx because they
serve different purposes: on Rx the switch must process traffic as
untagged and process it with a port-based VLAN, but with care not to
hinder bridging. On the other hand, the Tx VLAN is where the
reachability restrictions are imposed, since by tagging frames in the
xmit callback we are telling the switch onto which port to steer the
frame.
Some general guidance on how this support might be employed for
real-life hardware (some comments made by Florian Fainelli):
- If the hardware supports VLAN tag stacking, it should somehow back
up its private VLAN settings when the bridge tries to override them.
Then the driver could re-apply them as outer tags. Dedicating an outer
tag per bridge device would allow identical inner tag VID numbers to
co-exist, yet preserve broadcast domain isolation.
- If the switch cannot handle VLAN tag stacking, it should disable this
port separation when added as slave to a vlan_filtering bridge, in
that case having reduced functionality.
- Drivers for old switches that don't support the entire VLAN_N_VID
range will need to rework the current range selection mechanism.
Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vivien Didelot <vivien.didelot@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-05 17:19:22 +07:00
#define DSA_TAG_PROTO_8021Q_VALUE		12
#define DSA_TAG_PROTO_SJA1105_VALUE		13
#define DSA_TAG_PROTO_KSZ8795_VALUE		14
#define DSA_TAG_PROTO_OCELOT_VALUE		15
#define DSA_TAG_PROTO_AR9331_VALUE		16

enum dsa_tag_protocol {
	DSA_TAG_PROTO_NONE		= DSA_TAG_PROTO_NONE_VALUE,
	DSA_TAG_PROTO_BRCM		= DSA_TAG_PROTO_BRCM_VALUE,
	DSA_TAG_PROTO_BRCM_PREPEND	= DSA_TAG_PROTO_BRCM_PREPEND_VALUE,
	DSA_TAG_PROTO_DSA		= DSA_TAG_PROTO_DSA_VALUE,
	DSA_TAG_PROTO_EDSA		= DSA_TAG_PROTO_EDSA_VALUE,
	DSA_TAG_PROTO_GSWIP		= DSA_TAG_PROTO_GSWIP_VALUE,
	DSA_TAG_PROTO_KSZ9477		= DSA_TAG_PROTO_KSZ9477_VALUE,
	DSA_TAG_PROTO_KSZ9893		= DSA_TAG_PROTO_KSZ9893_VALUE,
	DSA_TAG_PROTO_LAN9303		= DSA_TAG_PROTO_LAN9303_VALUE,
	DSA_TAG_PROTO_MTK		= DSA_TAG_PROTO_MTK_VALUE,
	DSA_TAG_PROTO_QCA		= DSA_TAG_PROTO_QCA_VALUE,
	DSA_TAG_PROTO_TRAILER		= DSA_TAG_PROTO_TRAILER_VALUE,
	DSA_TAG_PROTO_8021Q		= DSA_TAG_PROTO_8021Q_VALUE,
	DSA_TAG_PROTO_SJA1105		= DSA_TAG_PROTO_SJA1105_VALUE,
	DSA_TAG_PROTO_KSZ8795		= DSA_TAG_PROTO_KSZ8795_VALUE,
	DSA_TAG_PROTO_OCELOT		= DSA_TAG_PROTO_OCELOT_VALUE,
	DSA_TAG_PROTO_AR9331		= DSA_TAG_PROTO_AR9331_VALUE,
};

struct packet_type;
struct dsa_switch;

struct dsa_device_ops {
	struct sk_buff *(*xmit)(struct sk_buff *skb, struct net_device *dev);
	struct sk_buff *(*rcv)(struct sk_buff *skb, struct net_device *dev,
			       struct packet_type *pt);
	int (*flow_dissect)(const struct sk_buff *skb, __be16 *proto,
			    int *offset);
net: dsa: Allow drivers to filter packets they can decode source port from
Frames get processed by DSA and redirected to switch port net devices
based on the ETH_P_XDSA multiplexed packet_type handler found by the
network stack when calling eth_type_trans().
The running assumption is that once the DSA .rcv function is called, DSA
is always able to decode the switch tag in order to change the skb->dev
from its master.
However there are tagging protocols (such as the new DSA_TAG_PROTO_SJA1105,
user of DSA_TAG_PROTO_8021Q) where this assumption is not completely
true, since switch tagging piggybacks on the absence of a vlan_filtering
bridge. Moreover, management traffic (BPDU, PTP) for this switch doesn't
rely on switch tagging, but on a different mechanism. So it would make
sense to at least be able to terminate that.
Having DSA receive traffic it can't decode would put it in an impossible
situation: the eth_type_trans() function would invoke the DSA .rcv(),
which could not change skb->dev, then eth_type_trans() would be invoked
again, which again would call the DSA .rcv, and the packet would never
be able to exit the DSA filter and would spiral in a loop until the
whole system dies.
This happens because eth_type_trans() doesn't actually look at the skb
(so as to identify a potential tag) when it deems it as being
ETH_P_XDSA. It just checks whether skb->dev has a DSA private pointer
installed (therefore it's a DSA master) and that there exists a .rcv
callback (everybody except DSA_TAG_PROTO_NONE has that). This is
understandable as there are many switch tags out there, and exhaustively
checking for all of them is far from ideal.
The solution lies in introducing a filtering function for each tagging
protocol. In the absence of a filtering function, all traffic is passed
to the .rcv DSA callback. The tagging protocol should see the filtering
function as a pre-validation that it can decode the incoming skb. The
traffic that doesn't match the filter will bypass the DSA .rcv callback
and be left on the master netdevice, which wasn't previously possible.
Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-05 17:19:23 +07:00

	/* Used to determine which traffic should match the DSA filter in
	 * eth_type_trans, and which, if any, should bypass it and be
	 * processed as regular on the master net device.
	 */
	bool (*filter)(const struct sk_buff *skb, struct net_device *dev);
	unsigned int overhead;
	const char *name;
	enum dsa_tag_protocol proto;
};

#define DSA_TAG_DRIVER_ALIAS "dsa_tag-"
#define MODULE_ALIAS_DSA_TAG_DRIVER(__proto)				\
	MODULE_ALIAS(DSA_TAG_DRIVER_ALIAS __stringify(__proto##_VALUE))

struct dsa_skb_cb {
	struct sk_buff *clone;
};

struct __dsa_skb_cb {
	struct dsa_skb_cb cb;
	u8 priv[48 - sizeof(struct dsa_skb_cb)];
};

#define DSA_SKB_CB(skb) ((struct dsa_skb_cb *)((skb)->cb))

#define DSA_SKB_CB_PRIV(skb)			\
	((void *)(skb)->cb + offsetof(struct __dsa_skb_cb, priv))
struct dsa_switch_tree {
	struct list_head list;

	/* Notifier chain for switch-wide events */
	struct raw_notifier_head nh;

	/* Tree identifier */
	unsigned int index;

	/* Number of switches attached to this tree */
	struct kref refcount;

	/* Has this tree been applied to the hardware? */
	bool setup;

	/*
	 * Configuration data for the platform device that owns
	 * this dsa switch tree instance.
	 */
	struct dsa_platform_data *pd;

	/* List of switch ports */
	struct list_head ports;

	/* List of DSA links composing the routing table */
	struct list_head rtable;
};

/* TC matchall action types */
enum dsa_port_mall_action_type {
	DSA_PORT_MALL_MIRROR,
	DSA_PORT_MALL_POLICER,
};

/* TC mirroring entry */
struct dsa_mall_mirror_tc_entry {
	u8 to_local_port;
	bool ingress;
};

/* TC port policer entry */
struct dsa_mall_policer_tc_entry {
	s64 burst;
	u64 rate_bytes_per_sec;
};

/* TC matchall entry */
struct dsa_mall_tc_entry {
	struct list_head list;
	unsigned long cookie;
	enum dsa_port_mall_action_type type;
	union {
		struct dsa_mall_mirror_tc_entry mirror;
		struct dsa_mall_policer_tc_entry policer;
	};
};

struct dsa_port {
	/* A CPU port is physically connected to a master device.
	 * A user port exposed to userspace has a slave device.
	 */
	union {
		struct net_device *master;
		struct net_device *slave;
	};

	/* CPU port tagging operations used by master or slave devices */
	const struct dsa_device_ops *tag_ops;

	/* Copies for faster access in master receive hot path */
	struct dsa_switch_tree *dst;
	struct sk_buff *(*rcv)(struct sk_buff *skb, struct net_device *dev,
			       struct packet_type *pt);
	bool (*filter)(const struct sk_buff *skb, struct net_device *dev);

	enum {
		DSA_PORT_TYPE_UNUSED = 0,
		DSA_PORT_TYPE_CPU,
		DSA_PORT_TYPE_DSA,
		DSA_PORT_TYPE_USER,
	} type;

	struct dsa_switch *ds;
	unsigned int index;
	const char *name;
	struct dsa_port *cpu_dp;
	const char *mac;
	struct device_node *dn;
	unsigned int ageing_time;
	bool vlan_filtering;
	u8 stp_state;
	struct net_device *bridge_dev;
	struct devlink_port devlink_port;
	struct phylink *pl;
	struct phylink_config pl_config;

	struct list_head list;
|
|
|
|
|
2019-05-05 17:19:26 +07:00
|
|
|
/*
|
|
|
|
* Give the switch driver somewhere to hang its per-port private data
|
|
|
|
* structures (accessible from the tagger).
|
|
|
|
*/
|
|
|
|
void *priv;
|
|
|
|
|
2017-06-14 03:27:20 +07:00
|
|
|
/*
|
|
|
|
* Original copy of the master netdev ethtool_ops
|
|
|
|
*/
|
|
|
|
const struct ethtool_ops *orig_ethtool_ops;
|
2019-01-16 05:43:04 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Original copy of the master netdev net_device_ops
|
|
|
|
*/
|
|
|
|
const struct net_device_ops *orig_ndo_ops;
|
2019-10-22 03:51:19 +07:00
|
|
|
|
|
|
|
bool setup;
|
2016-06-05 02:16:57 +07:00
|
|
|
};

/* TODO: ideally DSA ports would have a single dp->link_dp member,
 * and no dst->rtable nor this struct dsa_link would be needed,
 * but this would require some more complex tree walking,
 * so keep it stupid at the moment and list them all.
 */
struct dsa_link {
	struct dsa_port *dp;
	struct dsa_port *link_dp;
	struct list_head list;
};

struct dsa_switch {
	bool setup;

	struct device *dev;

	/*
	 * Parent switch tree, and switch index.
	 */
	struct dsa_switch_tree	*dst;
	unsigned int		index;

	/* Listener for switch fabric events */
	struct notifier_block	nb;

	/*
	 * Give the switch driver somewhere to hang its private data
	 * structure.
	 */
	void *priv;

	/*
	 * Configuration data for this switch.
	 */
	struct dsa_chip_data	*cd;

	/*
	 * The switch operations.
	 */
	const struct dsa_switch_ops	*ops;

	/*
	 * Slave mii_bus and devices for the individual ports.
	 */
	u32			phys_mii_mask;
	struct mii_bus		*slave_mii_bus;

	/* Ageing Time limits in msecs */
	unsigned int ageing_time_min;
	unsigned int ageing_time_max;

	/* devlink used to represent this switch device */
	struct devlink		*devlink;

	/* Number of switch port queues */
	unsigned int		num_tx_queues;

	/* Disallow bridge core from requesting different VLAN awareness
	 * settings on ports if not hardware-supported
	 */
	bool			vlan_filtering_is_global;

	/* In case vlan_filtering_is_global is set, the VLAN awareness state
	 * should be retrieved from here and not from the per-port settings.
	 */
	bool			vlan_filtering;

	/* MAC PCS does not provide link state change interrupt, and requires
	 * polling. Flag passed on to PHYLINK.
	 */
	bool			pcs_poll;

net: dsa: implement auto-normalization of MTU for bridge hardware datapath

Many switches don't have an explicit knob for configuring the MTU
(maximum transmission unit per interface). Instead, they do the
length-based packet admission checks on the ingress interface, for
reasons that are easy to understand (why would you accept a packet in
the queuing subsystem if you know you're going to drop it anyway). So it
is actually the MRU that these switches permit configuring.

In Linux there only exists the IFLA_MTU netlink attribute and the
associated dev_set_mtu function. The comments like to play blind and say
that it's changing the "maximum transfer unit", which is to say that
there isn't any directionality in the meaning of the MTU word. So that
is the interpretation that this patch is giving to things: MTU == MRU.

When 2 interfaces having different MTUs are bridged, the bridge driver
MTU auto-adjustment logic kicks in: what br_mtu_auto_adjust() does is it
adjusts the MTU of the bridge net device itself (and not that of the
slave net devices) to the minimum value of all slave interfaces, in
order for forwarded packets to not exceed the MTU regardless of the
interface they are received and sent on.

The idea behind this behavior, and why the slave MTUs are not adjusted,
is that normal termination from Linux over the L2 forwarding domain
should happen over the bridge net device, which _is_ properly limited by
the minimum MTU. And termination over individual slave devices is
possible even if those are bridged. But that is not "forwarding", so
there's no reason to do normalization there, since only a single
interface sees that packet.

The problem with those switches that can only control the MRU is with
the offloaded data path, where a packet received on an interface with
MRU 9000 would still be forwarded to an interface with MRU 1500. And the
br_mtu_auto_adjust() function does not really help, since the MTU
configured on the bridge net device is ignored.

In order to enforce the de-facto MTU == MRU rule for these switches, we
need to do MTU normalization, which means: in order for no packet larger
than the MTU configured on this port to be sent, we need to limit the
MRU on all ports that this packet could possibly come from. Since we are
configuring the MRU via the MTU, it means that all ports within a bridge
forwarding domain should have the same MTU.

And that is exactly what this patch is trying to do.

From an implementation perspective, we try to follow the intent of the
user, otherwise there is a risk that we might livelock them (they try to
change the MTU on an already-bridged interface, but we just keep
changing it back in an attempt to keep the MTU normalized). So the MTU
that the bridge is normalized to is either:

 - The most recently changed one:

   ip link set dev swp0 master br0
   ip link set dev swp1 master br0
   ip link set dev swp0 mtu 1400

   This sequence will make swp1 inherit MTU 1400 from swp0.

 - The one of the most recently added interface to the bridge:

   ip link set dev swp0 master br0
   ip link set dev swp1 mtu 1400
   ip link set dev swp1 master br0

   The above sequence will make swp0 inherit MTU 1400 as well.

Suggested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

	/* For switches that only have the MRU configurable. To ensure the
	 * configured MTU is not exceeded, normalization of MRU on all bridged
	 * interfaces is needed.
	 */
	bool			mtu_enforcement_ingress;

	size_t num_ports;
};
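
The normalization rule described above — all ports inside one bridge forwarding domain converge on the most recently requested MTU — can be illustrated with a minimal userspace sketch. All `demo_*` names here are hypothetical and exist only for illustration; they are not part of the kernel API.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for a switch port: whether it is part of the
 * bridge forwarding domain, and its current MTU (== MRU on hardware
 * that only checks frame length on ingress). */
struct demo_port {
	int bridged;	/* nonzero if enslaved to the bridge */
	int mtu;
};

/* Apply new_mtu to every port in the same bridge, mimicking the
 * "most recently changed MTU wins" normalization policy.
 * Returns the number of ports whose MTU was adjusted. */
static int demo_mtu_normalize(struct demo_port *ports, size_t n, int new_mtu)
{
	int changed = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (ports[i].bridged && ports[i].mtu != new_mtu) {
			ports[i].mtu = new_mtu;
			changed++;
		}
	}
	return changed;
}
```

With two bridged ports at MTU 1500 and 9000 and one standalone port, requesting MTU 1400 on any bridged member would adjust the other bridged member too, while the standalone port keeps its own setting.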

static inline struct dsa_port *dsa_to_port(struct dsa_switch *ds, int p)
{
	struct dsa_switch_tree *dst = ds->dst;
	struct dsa_port *dp;

	list_for_each_entry(dp, &dst->ports, list)
		if (dp->ds == ds && dp->index == p)
			return dp;

	return NULL;
}

static inline bool dsa_is_unused_port(struct dsa_switch *ds, int p)
{
	return dsa_to_port(ds, p)->type == DSA_PORT_TYPE_UNUSED;
}

static inline bool dsa_is_cpu_port(struct dsa_switch *ds, int p)
{
	return dsa_to_port(ds, p)->type == DSA_PORT_TYPE_CPU;
}

static inline bool dsa_is_dsa_port(struct dsa_switch *ds, int p)
{
	return dsa_to_port(ds, p)->type == DSA_PORT_TYPE_DSA;
}

static inline bool dsa_is_user_port(struct dsa_switch *ds, int p)
{
	return dsa_to_port(ds, p)->type == DSA_PORT_TYPE_USER;
}

static inline u32 dsa_user_ports(struct dsa_switch *ds)
{
	u32 mask = 0;
	int p;

	for (p = 0; p < ds->num_ports; p++)
		if (dsa_is_user_port(ds, p))
			mask |= BIT(p);

	return mask;
}
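
The bitmask built by dsa_user_ports() (bit p set when port p is user-facing) can be exercised outside the kernel with a small self-contained mimic. The `demo_*` names and the simplified port-type array are assumptions for illustration only.

```c
#include <assert.h>

#define DEMO_BIT(n)	(1u << (n))

/* Simplified stand-in for the DSA_PORT_TYPE_* enum members. */
enum demo_port_type { DEMO_UNUSED, DEMO_CPU, DEMO_DSA, DEMO_USER };

/* Mirror of the loop in dsa_user_ports(): set bit p for each port p
 * whose type is user-facing. */
static unsigned int demo_user_ports(const enum demo_port_type *types, int num)
{
	unsigned int mask = 0;
	int p;

	for (p = 0; p < num; p++)
		if (types[p] == DEMO_USER)
			mask |= DEMO_BIT(p);

	return mask;
}
```

For a 4-port switch laid out as {CPU, USER, USER, UNUSED}, the resulting mask is 0x6 (bits 1 and 2).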

/* Return the local port used to reach an arbitrary switch device */
static inline unsigned int dsa_routing_port(struct dsa_switch *ds, int device)
{
	struct dsa_switch_tree *dst = ds->dst;
	struct dsa_link *dl;

	list_for_each_entry(dl, &dst->rtable, list)
		if (dl->dp->ds == ds && dl->link_dp->ds->index == device)
			return dl->dp->index;

	return ds->num_ports;
}

/* Return the local port used to reach an arbitrary switch port */
static inline unsigned int dsa_towards_port(struct dsa_switch *ds, int device,
					    int port)
{
	if (device == ds->index)
		return port;
	else
		return dsa_routing_port(ds, device);
}

/* Return the local port used to reach the dedicated CPU port */
static inline unsigned int dsa_upstream_port(struct dsa_switch *ds, int port)
{
	const struct dsa_port *dp = dsa_to_port(ds, port);
	const struct dsa_port *cpu_dp = dp->cpu_dp;

	if (!cpu_dp)
		return port;

	return dsa_towards_port(ds, cpu_dp->ds->index, cpu_dp->index);
}
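
The routing-table walk in dsa_routing_port() — linear scan of cross-chip links, with ds->num_ports as the "no route" sentinel — can be sketched with a flat array in place of the kernel's linked dst->rtable. Everything prefixed `demo_` is a hypothetical stand-in, not the kernel structures.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical flat routing table: one entry per cross-chip link,
 * standing in for the struct dsa_link entries on dst->rtable. */
struct demo_link {
	int from_sw;	/* switch the local port belongs to */
	int local_port;	/* egress port on from_sw */
	int to_sw;	/* switch reached through local_port */
};

/* Return the local port on switch `sw` used to reach switch `device`,
 * or `num_ports` when no route exists (the same out-of-range sentinel
 * dsa_routing_port() uses). */
static int demo_routing_port(const struct demo_link *rtable, size_t n,
			     int sw, int device, int num_ports)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (rtable[i].from_sw == sw && rtable[i].to_sw == device)
			return rtable[i].local_port;

	return num_ports;
}
```

With a two-switch tree where switch 0 reaches switch 1 through port 5, a lookup for switch 1 from switch 0 returns 5, and a lookup for an unknown switch returns the sentinel.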

static inline bool dsa_port_is_vlan_filtering(const struct dsa_port *dp)
{
	const struct dsa_switch *ds = dp->ds;

	if (ds->vlan_filtering_is_global)
		return ds->vlan_filtering;
	else
		return dp->vlan_filtering;
}

typedef int dsa_fdb_dump_cb_t(const unsigned char *addr, u16 vid,
			      bool is_static, void *data);
struct dsa_switch_ops {
	enum dsa_tag_protocol (*get_tag_protocol)(struct dsa_switch *ds,
						  int port,
						  enum dsa_tag_protocol mprot);

	int	(*setup)(struct dsa_switch *ds);
	void	(*teardown)(struct dsa_switch *ds);
	u32	(*get_phy_flags)(struct dsa_switch *ds, int port);

	/*
	 * Access to the switch's PHY registers.
	 */
	int	(*phy_read)(struct dsa_switch *ds, int port, int regnum);
	int	(*phy_write)(struct dsa_switch *ds, int port,
			     int regnum, u16 val);

	/*
	 * Link state adjustment (called from libphy)
	 */
	void	(*adjust_link)(struct dsa_switch *ds, int port,
			       struct phy_device *phydev);
	void	(*fixed_link_update)(struct dsa_switch *ds, int port,
				     struct fixed_phy_status *st);

	/*
	 * PHYLINK integration
	 */
	void	(*phylink_validate)(struct dsa_switch *ds, int port,
				    unsigned long *supported,
				    struct phylink_link_state *state);
	int	(*phylink_mac_link_state)(struct dsa_switch *ds, int port,
					  struct phylink_link_state *state);
	void	(*phylink_mac_config)(struct dsa_switch *ds, int port,
				      unsigned int mode,
				      const struct phylink_link_state *state);
	void	(*phylink_mac_an_restart)(struct dsa_switch *ds, int port);
	void	(*phylink_mac_link_down)(struct dsa_switch *ds, int port,
					 unsigned int mode,
					 phy_interface_t interface);
	void	(*phylink_mac_link_up)(struct dsa_switch *ds, int port,
				       unsigned int mode,
				       phy_interface_t interface,
				       struct phy_device *phydev,
				       int speed, int duplex,
				       bool tx_pause, bool rx_pause);
	void	(*phylink_fixed_state)(struct dsa_switch *ds, int port,
				       struct phylink_link_state *state);
	/*
	 * ethtool hardware statistics.
	 */
	void	(*get_strings)(struct dsa_switch *ds, int port,
			       u32 stringset, uint8_t *data);
	void	(*get_ethtool_stats)(struct dsa_switch *ds,
				     int port, uint64_t *data);
	int	(*get_sset_count)(struct dsa_switch *ds, int port, int sset);
	void	(*get_ethtool_phy_stats)(struct dsa_switch *ds,
					 int port, uint64_t *data);

	/*
	 * ethtool Wake-on-LAN
	 */
	void	(*get_wol)(struct dsa_switch *ds, int port,
			   struct ethtool_wolinfo *w);
	int	(*set_wol)(struct dsa_switch *ds, int port,
			   struct ethtool_wolinfo *w);

	/*
	 * ethtool timestamp info
	 */
	int	(*get_ts_info)(struct dsa_switch *ds, int port,
			       struct ethtool_ts_info *ts);

	/*
	 * Suspend and resume
	 */
	int	(*suspend)(struct dsa_switch *ds);
	int	(*resume)(struct dsa_switch *ds);

	/*
	 * Port enable/disable
	 */
	int	(*port_enable)(struct dsa_switch *ds, int port,
			       struct phy_device *phy);
	void	(*port_disable)(struct dsa_switch *ds, int port);

	/*
	 * Port's MAC EEE settings
	 */
	int	(*set_mac_eee)(struct dsa_switch *ds, int port,
			       struct ethtool_eee *e);
	int	(*get_mac_eee)(struct dsa_switch *ds, int port,
			       struct ethtool_eee *e);

	/* EEPROM access */
	int	(*get_eeprom_len)(struct dsa_switch *ds);
	int	(*get_eeprom)(struct dsa_switch *ds,
			      struct ethtool_eeprom *eeprom, u8 *data);
	int	(*set_eeprom)(struct dsa_switch *ds,
			      struct ethtool_eeprom *eeprom, u8 *data);

	/*
	 * Register access.
	 */
	int	(*get_regs_len)(struct dsa_switch *ds, int port);
	void	(*get_regs)(struct dsa_switch *ds, int port,
			    struct ethtool_regs *regs, void *p);

	/*
	 * Bridge integration
	 */
	int	(*set_ageing_time)(struct dsa_switch *ds, unsigned int msecs);
	int	(*port_bridge_join)(struct dsa_switch *ds, int port,
				    struct net_device *bridge);
	void	(*port_bridge_leave)(struct dsa_switch *ds, int port,
				     struct net_device *bridge);
	void	(*port_stp_state_set)(struct dsa_switch *ds, int port,
				      u8 state);
	void	(*port_fast_age)(struct dsa_switch *ds, int port);
	int	(*port_egress_floods)(struct dsa_switch *ds, int port,
				      bool unicast, bool multicast);

	/*
	 * VLAN support
	 */
	int	(*port_vlan_filtering)(struct dsa_switch *ds, int port,
				       bool vlan_filtering);
	int	(*port_vlan_prepare)(struct dsa_switch *ds, int port,
				     const struct switchdev_obj_port_vlan *vlan);
	void	(*port_vlan_add)(struct dsa_switch *ds, int port,
				 const struct switchdev_obj_port_vlan *vlan);
	int	(*port_vlan_del)(struct dsa_switch *ds, int port,
				 const struct switchdev_obj_port_vlan *vlan);
	/*
	 * Forwarding database
	 */
	int	(*port_fdb_add)(struct dsa_switch *ds, int port,
				const unsigned char *addr, u16 vid);
	int	(*port_fdb_del)(struct dsa_switch *ds, int port,
				const unsigned char *addr, u16 vid);
	int	(*port_fdb_dump)(struct dsa_switch *ds, int port,
				 dsa_fdb_dump_cb_t *cb, void *data);

	/*
	 * Multicast database
	 */
	int	(*port_mdb_prepare)(struct dsa_switch *ds, int port,
				    const struct switchdev_obj_port_mdb *mdb);
	void	(*port_mdb_add)(struct dsa_switch *ds, int port,
				const struct switchdev_obj_port_mdb *mdb);
	int	(*port_mdb_del)(struct dsa_switch *ds, int port,
				const struct switchdev_obj_port_mdb *mdb);
	/*
	 * RXNFC
	 */
	int	(*get_rxnfc)(struct dsa_switch *ds, int port,
			     struct ethtool_rxnfc *nfc, u32 *rule_locs);
	int	(*set_rxnfc)(struct dsa_switch *ds, int port,
			     struct ethtool_rxnfc *nfc);

	/*
	 * TC integration
	 */
	int	(*cls_flower_add)(struct dsa_switch *ds, int port,
				  struct flow_cls_offload *cls, bool ingress);
	int	(*cls_flower_del)(struct dsa_switch *ds, int port,
				  struct flow_cls_offload *cls, bool ingress);
	int	(*cls_flower_stats)(struct dsa_switch *ds, int port,
				    struct flow_cls_offload *cls, bool ingress);
	int	(*port_mirror_add)(struct dsa_switch *ds, int port,
				   struct dsa_mall_mirror_tc_entry *mirror,
				   bool ingress);
	void	(*port_mirror_del)(struct dsa_switch *ds, int port,
				   struct dsa_mall_mirror_tc_entry *mirror);
	int	(*port_policer_add)(struct dsa_switch *ds, int port,
				    struct dsa_mall_policer_tc_entry *policer);
	void	(*port_policer_del)(struct dsa_switch *ds, int port);
	int	(*port_setup_tc)(struct dsa_switch *ds, int port,
				 enum tc_setup_type type, void *type_data);

	/*
	 * Cross-chip operations
	 */
	int	(*crosschip_bridge_join)(struct dsa_switch *ds, int sw_index,
					 int port, struct net_device *br);
	void	(*crosschip_bridge_leave)(struct dsa_switch *ds, int sw_index,
					  int port, struct net_device *br);

	/*
	 * PTP functionality
	 */
	int	(*port_hwtstamp_get)(struct dsa_switch *ds, int port,
				     struct ifreq *ifr);
	int	(*port_hwtstamp_set)(struct dsa_switch *ds, int port,
				     struct ifreq *ifr);
	bool	(*port_txtstamp)(struct dsa_switch *ds, int port,
				 struct sk_buff *clone, unsigned int type);
	bool	(*port_rxtstamp)(struct dsa_switch *ds, int port,
				 struct sk_buff *skb, unsigned int type);

	/* Devlink parameters */
	int	(*devlink_param_get)(struct dsa_switch *ds, u32 id,
				     struct devlink_param_gset_ctx *ctx);
	int	(*devlink_param_set)(struct dsa_switch *ds, u32 id,
				     struct devlink_param_gset_ctx *ctx);

net: dsa: configure the MTU for switch ports

It is useful to be able to configure port policers on a switch to accept
frames of various sizes:

- Increase the MTU for better throughput from the default of 1500 if it
  is known that there is no 10/100 Mbps device in the network.
- Decrease the MTU to limit the latency of high-priority frames under
  congestion, or work around various network segments that add extra
  headers to packets which can't be fragmented.

For DSA slave ports, this is mostly a pass-through callback, called
through the regular ndo ops and at probe time (to ensure consistency
across all supported switches).

The CPU port is called with an MTU equal to the largest configured MTU
of the slave ports. The assumption is that the user might want to
sustain a bidirectional conversation with a partner over any switch
port.

The DSA master is configured the same as the CPU port, plus the tagger
overhead. Since the MTU is by definition L2 payload (sans Ethernet
header), it is up to each individual driver to figure out if it needs to
do anything special for its frame tags on the CPU port (it shouldn't
except in special cases). So the MTU does not contain the tagger
overhead on the CPU port.

However the MTU of the DSA master, minus the tagger overhead, is used as
a proxy for the MTU of the CPU port, which does not have a net device.
This is to avoid uselessly calling the .change_mtu function on the CPU
port when nothing should change. So it is safe to assume that the DSA
master and the CPU port MTUs are apart by exactly the tagger's overhead
in bytes.

Some changes were made around dsa_master_set_mtu(), a function which was
now removed, for 2 reasons:

- dev_set_mtu() already calls dev_validate_mtu(), so it's redundant to
  do the same thing in DSA
- __dev_set_mtu() returns 0 if ops->ndo_change_mtu is an absent method

That is to say, there's no need for this function in DSA; we can safely
call dev_set_mtu() directly, take the rtnl lock when necessary, and just
propagate whatever errors get reported (since the user probably wants to
be informed).

Some inspiration (mainly in the MTU DSA notifier) was taken from a
vaguely similar patch from Murali and Florian, who are credited as
co-developers down below.

Co-developed-by: Murali Krishna Policharla <murali.policharla@broadcom.com>
Signed-off-by: Murali Krishna Policharla <murali.policharla@broadcom.com>
Co-developed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

	/*
	 * MTU change functionality. Switches can also adjust their MRU through
	 * this method. By MTU, one understands the SDU (L2 payload) length.
	 * If the switch needs to account for the DSA tag on the CPU port, this
	 * method needs to do so privately.
	 */
	int	(*port_change_mtu)(struct dsa_switch *ds, int port,
				   int new_mtu);
	int	(*port_max_mtu)(struct dsa_switch *ds, int port);
};

#define DSA_DEVLINK_PARAM_DRIVER(_id, _name, _type, _cmodes)		\
	DEVLINK_PARAM_DRIVER(_id, _name, _type, _cmodes,		\
			     dsa_devlink_param_get, dsa_devlink_param_set, NULL)

int dsa_devlink_param_get(struct devlink *dl, u32 id,
			  struct devlink_param_gset_ctx *ctx);
int dsa_devlink_param_set(struct devlink *dl, u32 id,
			  struct devlink_param_gset_ctx *ctx);
int dsa_devlink_params_register(struct dsa_switch *ds,
				const struct devlink_param *params,
				size_t params_count);
void dsa_devlink_params_unregister(struct dsa_switch *ds,
				   const struct devlink_param *params,
				   size_t params_count);
int dsa_devlink_resource_register(struct dsa_switch *ds,
				  const char *resource_name,
				  u64 resource_size,
				  u64 resource_id,
				  u64 parent_resource_id,
				  const struct devlink_resource_size_params *size_params);

void dsa_devlink_resources_unregister(struct dsa_switch *ds);

void dsa_devlink_resource_occ_get_register(struct dsa_switch *ds,
					   u64 resource_id,
					   devlink_resource_occ_get_t *occ_get,
					   void *occ_get_priv);
void dsa_devlink_resource_occ_get_unregister(struct dsa_switch *ds,
					     u64 resource_id);

net: dsa: introduce a dsa_port_from_netdev public helper

As its implementation shows, this is synonymous with calling
dsa_slave_dev_check followed by dsa_slave_to_port, so it is quite simple
already and provides functionality which is already there.

However there is now a need for these functions outside dsa_priv.h, for
example in drivers that perform mirroring and redirection through
tc-flower offloads (they are given raw access to the flow_cls_offload
structure), where they need to call this function on act->dev.

But simply exporting dsa_slave_to_port would make it non-inline and
would result in an extra function call in the hotpath, as can be seen
for example in sja1105:

Before:

000006dc <sja1105_xmit>:
{
 6dc:   e92d4ff0        push    {r4, r5, r6, r7, r8, r9, sl, fp, lr}
 6e0:   e1a04000        mov     r4, r0
 6e4:   e591958c        ldr     r9, [r1, #1420] ; 0x58c <- Inline dsa_slave_to_port
 6e8:   e1a05001        mov     r5, r1
 6ec:   e24dd004        sub     sp, sp, #4
        u16 tx_vid = dsa_8021q_tx_vid(dp->ds, dp->index);
 6f0:   e1c901d8        ldrd    r0, [r9, #24]
 6f4:   ebfffffe        bl      0 <dsa_8021q_tx_vid>
                        6f4: R_ARM_CALL dsa_8021q_tx_vid
        u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
 6f8:   e1d416b0        ldrh    r1, [r4, #96]   ; 0x60
        u16 tx_vid = dsa_8021q_tx_vid(dp->ds, dp->index);
 6fc:   e1a08000        mov     r8, r0

After:

000006e4 <sja1105_xmit>:
{
 6e4:   e92d4ff0        push    {r4, r5, r6, r7, r8, r9, sl, fp, lr}
 6e8:   e1a04000        mov     r4, r0
 6ec:   e24dd004        sub     sp, sp, #4
        struct dsa_port *dp = dsa_slave_to_port(netdev);
 6f0:   e1a00001        mov     r0, r1
{
 6f4:   e1a05001        mov     r5, r1
        struct dsa_port *dp = dsa_slave_to_port(netdev);
 6f8:   ebfffffe        bl      0 <dsa_slave_to_port>
                        6f8: R_ARM_CALL dsa_slave_to_port
 6fc:   e1a09000        mov     r9, r0
        u16 tx_vid = dsa_8021q_tx_vid(dp->ds, dp->index);
 700:   e1c001d8        ldrd    r0, [r0, #24]
 704:   ebfffffe        bl      0 <dsa_8021q_tx_vid>
                        704: R_ARM_CALL dsa_8021q_tx_vid

Because we want to avoid possible performance regressions, introduce
this new function which is designed to be public.

Suggested-by: Vivien Didelot <vivien.didelot@gmail.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Vivien Didelot <vivien.didelot@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

struct dsa_port *dsa_port_from_netdev(struct net_device *netdev);

struct dsa_devlink_priv {
	struct dsa_switch *ds;
};

struct dsa_switch_driver {
	struct list_head	list;
	const struct dsa_switch_ops *ops;
};

struct net_device *dsa_dev_to_net_device(struct device *dev);

/* Keep inline for faster access in hot path */
static inline bool netdev_uses_dsa(const struct net_device *dev)
{
#if IS_ENABLED(CONFIG_NET_DSA)
	return dev->dsa_ptr && dev->dsa_ptr->rcv;
#endif
	return false;
}
net: dsa: Allow drivers to filter packets they can decode source port from
Frames get processed by DSA and redirected to switch port net devices
based on the ETH_P_XDSA multiplexed packet_type handler found by the
network stack when calling eth_type_trans().
The running assumption is that once the DSA .rcv function is called, DSA
is always able to decode the switch tag in order to change the skb->dev
from its master.
However there are tagging protocols (such as the new DSA_TAG_PROTO_SJA1105,
user of DSA_TAG_PROTO_8021Q) where this assumption is not completely
true, since switch tagging piggybacks on the absence of a vlan_filtering
bridge. Moreover, management traffic (BPDU, PTP) for this switch doesn't
rely on switch tagging, but on a different mechanism. So it would make
sense to at least be able to terminate that.
Having DSA receive traffic it can't decode would put it in an impossible
situation: the eth_type_trans() function would invoke the DSA .rcv(),
which could not change skb->dev, then eth_type_trans() would be invoked
again, which again would call the DSA .rcv, and the packet would never
be able to exit the DSA filter and would spiral in a loop until the
whole system dies.
This happens because eth_type_trans() doesn't actually look at the skb
(so as to identify a potential tag) when it deems it as being
ETH_P_XDSA. It just checks whether skb->dev has a DSA private pointer
installed (therefore it's a DSA master) and that there exists a .rcv
callback (everybody except DSA_TAG_PROTO_NONE has that). This is
understandable as there are many switch tags out there, and exhaustively
checking for all of them is far from ideal.
The solution lies in introducing a filtering function for each tagging
protocol. In the absence of a filtering function, all traffic is passed
to the .rcv DSA callback. The tagging protocol should see the filtering
function as a pre-validation that it can decode the incoming skb. The
traffic that doesn't match the filter will bypass the DSA .rcv callback
and be left on the master netdevice, which wasn't previously possible.
Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
static inline bool dsa_can_decode(const struct sk_buff *skb,
				  struct net_device *dev)
{
#if IS_ENABLED(CONFIG_NET_DSA)
	return !dev->dsa_ptr->filter || dev->dsa_ptr->filter(skb, dev);
#endif
	return false;
}

void dsa_unregister_switch(struct dsa_switch *ds);
int dsa_register_switch(struct dsa_switch *ds);
#ifdef CONFIG_PM_SLEEP
int dsa_switch_suspend(struct dsa_switch *ds);
int dsa_switch_resume(struct dsa_switch *ds);
#else
static inline int dsa_switch_suspend(struct dsa_switch *ds)
{
	return 0;
}
static inline int dsa_switch_resume(struct dsa_switch *ds)
{
	return 0;
}
#endif /* CONFIG_PM_SLEEP */
|
|
|
|
|
2017-10-12 00:57:48 +07:00
|
|
|
enum dsa_notifier_type {
|
|
|
|
DSA_PORT_REGISTER,
|
|
|
|
DSA_PORT_UNREGISTER,
|
|
|
|
};
|
|
|
|
|
|
|
|
struct dsa_notifier_info {
|
|
|
|
struct net_device *dev;
|
|
|
|
};
|
|
|
|
|
|
|
|
struct dsa_notifier_register_info {
|
|
|
|
struct dsa_notifier_info info; /* must be first */
|
|
|
|
struct net_device *master;
|
|
|
|
unsigned int port_number;
|
|
|
|
unsigned int switch_number;
|
|
|
|
};
|
|
|
|
|
|
|
|
static inline struct net_device *
|
|
|
|
dsa_notifier_info_to_dev(const struct dsa_notifier_info *info)
|
|
|
|
{
|
|
|
|
return info->dev;
|
|
|
|
}
|
|
|
|
|
|
|
|
#if IS_ENABLED(CONFIG_NET_DSA)
|
|
|
|
int register_dsa_notifier(struct notifier_block *nb);
|
|
|
|
int unregister_dsa_notifier(struct notifier_block *nb);
|
|
|
|
int call_dsa_notifiers(unsigned long val, struct net_device *dev,
|
|
|
|
struct dsa_notifier_info *info);
|
|
|
|
#else
|
|
|
|
static inline int register_dsa_notifier(struct notifier_block *nb)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int unregister_dsa_notifier(struct notifier_block *nb)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int call_dsa_notifiers(unsigned long val, struct net_device *dev,
|
|
|
|
struct dsa_notifier_info *info)
|
|
|
|
{
|
|
|
|
return NOTIFY_DONE;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2017-10-12 00:57:49 +07:00
|
|
|
/* Broadcom tag specific helpers to insert and extract queue/port number */
|
|
|
|
#define BRCM_TAG_SET_PORT_QUEUE(p, q)	((p) << 8 | (q))
|
|
|
|
#define BRCM_TAG_GET_PORT(v) ((v) >> 8)
|
|
|
|
#define BRCM_TAG_GET_QUEUE(v) ((v) & 0xff)
|
|
|
|
|
2018-04-26 02:12:52 +07:00
|
|
|
|
2019-05-05 17:19:25 +07:00
|
|
|
netdev_tx_t dsa_enqueue_skb(struct sk_buff *skb, struct net_device *dev);
|
2018-04-26 02:12:52 +07:00
|
|
|
int dsa_port_get_phy_strings(struct dsa_port *dp, uint8_t *data);
|
|
|
|
int dsa_port_get_ethtool_phy_stats(struct dsa_port *dp, uint64_t *data);
|
|
|
|
int dsa_port_get_phy_sset_count(struct dsa_port *dp);
|
2018-05-11 03:17:32 +07:00
|
|
|
void dsa_port_phylink_mac_change(struct dsa_switch *ds, int port, bool up);
|
2018-04-26 02:12:52 +07:00
|
|
|
|
2019-04-29 00:37:15 +07:00
|
|
|
struct dsa_tag_driver {
|
|
|
|
const struct dsa_device_ops *ops;
|
|
|
|
struct list_head list;
|
|
|
|
struct module *owner;
|
|
|
|
};
|
|
|
|
|
|
|
|
void dsa_tag_drivers_register(struct dsa_tag_driver *dsa_tag_driver_array[],
|
|
|
|
unsigned int count,
|
|
|
|
struct module *owner);
|
|
|
|
void dsa_tag_drivers_unregister(struct dsa_tag_driver *dsa_tag_driver_array[],
|
|
|
|
unsigned int count);
|
|
|
|
|
|
|
|
#define dsa_tag_driver_module_drivers(__dsa_tag_drivers_array, __count) \
|
|
|
|
static int __init dsa_tag_driver_module_init(void) \
|
|
|
|
{ \
|
|
|
|
dsa_tag_drivers_register(__dsa_tag_drivers_array, __count, \
|
|
|
|
THIS_MODULE); \
|
|
|
|
return 0; \
|
|
|
|
} \
|
|
|
|
module_init(dsa_tag_driver_module_init); \
|
|
|
|
\
|
|
|
|
static void __exit dsa_tag_driver_module_exit(void) \
|
|
|
|
{ \
|
|
|
|
dsa_tag_drivers_unregister(__dsa_tag_drivers_array, __count); \
|
|
|
|
} \
|
|
|
|
module_exit(dsa_tag_driver_module_exit)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* module_dsa_tag_drivers() - Helper macro for registering DSA tag
|
|
|
|
* drivers
|
|
|
|
 * @__ops_array: Array of tag driver structures
|
|
|
|
*
|
|
|
|
* Helper macro for DSA tag drivers which do not do anything special
|
|
|
|
* in module init/exit. Each module may only use this macro once, and
|
|
|
|
* calling it replaces module_init() and module_exit().
|
|
|
|
*/
|
|
|
|
#define module_dsa_tag_drivers(__ops_array) \
|
|
|
|
dsa_tag_driver_module_drivers(__ops_array, ARRAY_SIZE(__ops_array))
|
|
|
|
|
|
|
|
#define DSA_TAG_DRIVER_NAME(__ops) dsa_tag_driver ## _ ## __ops
|
|
|
|
|
|
|
|
/* Create a static structure from which we can build a linked list of dsa_tag
|
|
|
|
* drivers
|
|
|
|
*/
|
|
|
|
#define DSA_TAG_DRIVER(__ops) \
|
|
|
|
static struct dsa_tag_driver DSA_TAG_DRIVER_NAME(__ops) = { \
|
|
|
|
.ops = &__ops, \
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* module_dsa_tag_driver() - Helper macro for registering a single DSA tag
|
|
|
|
* driver
|
|
|
|
 * @__ops: Single tag driver structure
|
|
|
|
*
|
|
|
|
* Helper macro for DSA tag drivers which do not do anything special
|
|
|
|
* in module init/exit. Each module may only use this macro once, and
|
|
|
|
* calling it replaces module_init() and module_exit().
|
|
|
|
*/
|
|
|
|
#define module_dsa_tag_driver(__ops) \
|
|
|
|
DSA_TAG_DRIVER(__ops); \
|
|
|
|
\
|
|
|
|
static struct dsa_tag_driver *dsa_tag_driver_array[] = { \
|
|
|
|
&DSA_TAG_DRIVER_NAME(__ops) \
|
|
|
|
}; \
|
|
|
|
module_dsa_tag_drivers(dsa_tag_driver_array)
|
net: Distributed Switch Architecture protocol support
Distributed Switch Architecture is a protocol for managing hardware
switch chips. It consists of a set of MII management registers and
commands to configure the switch, and an ethernet header format to
signal which of the ports of the switch a packet was received from
or is intended to be sent to.
The switches that this driver supports are typically embedded in
access points and routers, and a typical setup with a DSA switch
looks something like this:
+-----------+ +-----------+
| | RGMII | |
| +-------+ +------ 1000baseT MDI ("WAN")
| | | 6-port +------ 1000baseT MDI ("LAN1")
| CPU | | ethernet +------ 1000baseT MDI ("LAN2")
| |MIImgmt| switch +------ 1000baseT MDI ("LAN3")
| +-------+ w/5 PHYs +------ 1000baseT MDI ("LAN4")
| | | |
+-----------+ +-----------+
The switch driver presents each port on the switch as a separate
network interface to Linux, polls the switch to maintain software
link state of those ports, forwards MII management interface
accesses to those network interfaces (e.g. as done by ethtool) to
the switch, and exposes the switch's hardware statistics counters
via the appropriate Linux kernel interfaces.
This initial patch supports the MII management interface register
layout of the Marvell 88E6123, 88E6161 and 88E6165 switch chips, and
supports the "Ethertype DSA" packet tagging format.
(There is no officially registered ethertype for the Ethertype DSA
packet format, so we just grab a random one. The ethertype to use
is programmed into the switch, and the switch driver uses the value
of ETH_P_EDSA for this, so this define can be changed at any time in
the future if the one we chose is allocated to another protocol or
if Ethertype DSA gets its own officially registered ethertype, and
everything will continue to work.)
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Tested-by: Nicolas Pitre <nico@marvell.com>
Tested-by: Byron Bradley <byron.bbradley@gmail.com>
Tested-by: Tim Ellis <tim.ellis@mac.com>
Tested-by: Peter van Valderen <linux@ddcrew.com>
Tested-by: Dirk Teurlings <dirk@upexia.nl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-10-07 20:44:02 +07:00
|
|
|
#endif
|
2019-04-29 00:37:15 +07:00
|
|
|
|