License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet containing the side-by-side output of
two independent scanners (ScanCode & Windriver) producing SPDX tag:value
files, created by Philippe Ombredanne. Philippe prepared the base
worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them (even if
<5 lines) were included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license was applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was a */uapi/* path one, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was tagged "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, in some cases with confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version from earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 21:07:57 +07:00

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_RTNETLINK_H
#define __LINUX_RTNETLINK_H

#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/wait.h>
#include <uapi/linux/rtnetlink.h>

extern int rtnetlink_send(struct sk_buff *skb, struct net *net, u32 pid, u32 group, int echo);
extern int rtnl_unicast(struct sk_buff *skb, struct net *net, u32 pid);
extern void rtnl_notify(struct sk_buff *skb, struct net *net, u32 pid,
			u32 group, struct nlmsghdr *nlh, gfp_t flags);
extern void rtnl_set_sk_err(struct net *net, u32 group, int error);
extern int rtnetlink_put_metrics(struct sk_buff *skb, u32 *metrics);
extern int rtnl_put_cacheinfo(struct sk_buff *skb, struct dst_entry *dst,
			      u32 id, long expires, u32 error);

void rtmsg_ifinfo(int type, struct net_device *dev, unsigned change, gfp_t flags);
void rtmsg_ifinfo_newnet(int type, struct net_device *dev, unsigned int change,
			 gfp_t flags, int *new_nsid, int new_ifindex);
struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
				       unsigned change, u32 event,
				       gfp_t flags, int *new_nsid,
				       int new_ifindex);
void rtmsg_ifinfo_send(struct sk_buff *skb, struct net_device *dev,
		       gfp_t flags);

/* RTNL is used as a global lock for all changes to network configuration */
extern void rtnl_lock(void);
extern void rtnl_unlock(void);
extern int rtnl_trylock(void);
extern int rtnl_is_locked(void);
extern int rtnl_lock_killable(void);

extern wait_queue_head_t netdev_unregistering_wq;
net: Introduce net_sem for protection of pernet_list
Currently, the mutex is mostly used to protect the pernet operations
list. It orders setup_net() and cleanup_net() with parallel
{un,}register_pernet_operations() calls, so the ->exit{,batch} methods
of the same pernet operations whose ->init methods were called are
executed for a dying net, even after the net namespace
is unlinked from net_namespace_list in cleanup_net().
But there are several problems with scalability. The first one
is that no more than one net can be created or destroyed
at the same moment on the node. For big machines with many cpus
running many containers this is very noticeable.
The second one is that we need to synchronize_rcu() after a net
is removed from net_namespace_list():
Destroy net_ns:
cleanup_net()
mutex_lock(&net_mutex)
list_del_rcu(&net->list)
synchronize_rcu() <--- Sleep there for ages
list_for_each_entry_reverse(ops, &pernet_list, list)
ops_exit_list(ops, &net_exit_list)
list_for_each_entry_reverse(ops, &pernet_list, list)
ops_free_list(ops, &net_exit_list)
mutex_unlock(&net_mutex)
This primitive is not fast, especially on systems with many processors
and/or when preemptible RCU is enabled in the config. So, the whole time
cleanup_net() is waiting for the RCU grace period, creation of new net
namespaces is not possible; the tasks attempting it sleep on the same mutex:
Create net_ns:
copy_net_ns()
mutex_lock_killable(&net_mutex) <--- Sleep there for ages
I observed 20-30 second hangs of "unshare -n" on an ordinary 8-cpu laptop
with preemptible RCU enabled after a CRIU test round finished.
The solution is to convert net_mutex to an rw_semaphore and add fine-grained
locks to the really small number of pernet_operations that actually need them.
Then pernet_operations::init/::exit methods, which modify net-related data,
will require only down_read() locking, while down_write() will be used
for changing pernet_list (i.e., when modules are being loaded and unloaded).
This gives a significant performance increase after the whole patch set is
applied, as you may see here:
%for i in {1..10000}; do unshare -n bash -c exit; done
*before*
real 1m40,377s
user 0m9,672s
sys 0m19,928s
*after*
real 0m17,007s
user 0m5,311s
sys 0m11,779
(5.8 times faster)
This patch starts replacing net_mutex with net_sem. It adds the rw_semaphore,
describes the variables it protects, and makes use of it where appropriate.
net_mutex is still present, and the next patches will remove it step by step.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Andrei Vagin <avagin@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-02-13 16:26:23 +07:00

extern struct rw_semaphore net_sem;

#ifdef CONFIG_PROVE_LOCKING
extern bool lockdep_rtnl_is_held(void);
#else
static inline bool lockdep_rtnl_is_held(void)
{
	return true;
}
#endif /* #ifdef CONFIG_PROVE_LOCKING */

/**
 * rcu_dereference_rtnl - rcu_dereference with debug checking
 * @p: The pointer to read, prior to dereferencing
 *
 * Do an rcu_dereference(p), but check caller either holds rcu_read_lock()
 * or RTNL. Note : Please prefer rtnl_dereference() or rcu_dereference()
 */
#define rcu_dereference_rtnl(p)					\
	rcu_dereference_check(p, lockdep_rtnl_is_held())

/**
 * rcu_dereference_bh_rtnl - rcu_dereference_bh with debug checking
 * @p: The pointer to read, prior to dereference
 *
 * Do an rcu_dereference_bh(p), but check caller either holds rcu_read_lock_bh()
 * or RTNL. Note : Please prefer rtnl_dereference() or rcu_dereference_bh()
 */
#define rcu_dereference_bh_rtnl(p)				\
	rcu_dereference_bh_check(p, lockdep_rtnl_is_held())

/**
 * rtnl_dereference - fetch RCU pointer when updates are prevented by RTNL
 * @p: The pointer to read, prior to dereferencing
 *
 * Return the value of the specified RCU-protected pointer, but omit
 * the READ_ONCE(), because caller holds RTNL.
 */
#define rtnl_dereference(p)					\
	rcu_dereference_protected(p, lockdep_rtnl_is_held())

static inline struct netdev_queue *dev_ingress_queue(struct net_device *dev)
{
	return rtnl_dereference(dev->ingress_queue);
}
net: use jump label patching for ingress qdisc in __netif_receive_skb_core
Even if we make use of classifier and actions from the egress
path, we're going into handle_ing() executing additional code
on a per-packet cost for ingress qdisc, just to realize that
nothing is attached on ingress.
Instead, this can just be blinded out as a no-op entirely with
the use of a static key. On input fast-path, we already make
use of static keys in various places, e.g. skb time stamping,
in RPS, etc. It makes sense to not waste time when we're assured
that no ingress qdisc is attached anywhere.
Enabling/disabling of that code path is done via two helpers, namely
net_{inc,dec}_ingress_queue(), which are invoked under the RTNL mutex
when an ingress qdisc is either initialized or destroyed.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-04-11 04:07:54 +07:00

struct netdev_queue *dev_ingress_queue_create(struct net_device *dev);

#ifdef CONFIG_NET_INGRESS
void net_inc_ingress_queue(void);
void net_dec_ingress_queue(void);
#endif
net, sched: add clsact qdisc
This work adds a generalization of the ingress qdisc as a qdisc holding
only classifiers. The clsact qdisc works on ingress, but also on egress.
In both cases, its execution happens without taking the qdisc lock, and
the main difference for the egress part compared to prior version of [1]
is that this can be applied with _any_ underlying real egress qdisc (also
classless ones).
Besides solving the use-case of [1], that is, allowing for more programmability
on assigning skb->priority for the mqprio case that is supported by most
popular 10G+ NICs, it also opens up a lot more flexibility for other tc
applications. The main work on classification can already be done at clsact
egress time if the use-case allows and state stored for later retrieval
f.e. again in skb->priority with major/minors (which is checked by most
classful qdiscs before consulting tc_classify()) and/or in other skb fields
like skb->tc_index for some light-weight post-processing to get to the
eventual classid in case of a classful qdisc. Another use case is that
the clsact egress part allows to have a central egress counterpart to
the ingress classifiers, so that classifiers can easily share state (e.g.
in cls_bpf via eBPF maps) for ingress and egress.
Currently, default setups like mq + pfifo_fast would, for this use case,
require using, for example, the prio qdisc instead (to get a tc_classify()
run) and duplicating the egress classifier for each queue. With clsact, it allows
for leaving the setup as is, it can additionally assign skb->priority to
put the skb in one of pfifo_fast's bands and it can share state with maps.
Moreover, we can access the skb's dst entry (f.e. to retrieve tclassid)
w/o the need to perform a skb_dst_force() to hold on to it any longer. In
lwt case, we can also use this facility to setup dst metadata via cls_bpf
(bpf_skb_set_tunnel_key()) without needing a real egress qdisc just for
that (case of IFF_NO_QUEUE devices, for example).
The realization can be done without any changes to the scheduler core
framework. All it takes is that we have two a-priori defined minors/child
classes, where we can mux between ingress and egress classifier list
(dev->ingress_cl_list and dev->egress_cl_list, latter stored close to
dev->_tx to avoid extra cacheline miss for moderate loads). The egress
part is a bit similar modelled to handle_ing() and patched to a noop in
case the functionality is not used. Both handlers are now called
sch_handle_ingress() and sch_handle_egress(), code sharing among the two
doesn't seem practical as there are various minor differences in both
paths, so that making them conditional in a single handler would rather
slow things down.
Full compatibility to ingress qdisc is provided as well. Since both
piggyback on TC_H_CLSACT, only one of them (ingress/clsact) can exist
per netdevice, and thus ingress qdisc specific behaviour can be retained
for user space. This means, either a user does 'tc qdisc add dev foo ingress'
and configures ingress qdisc as usual, or the 'tc qdisc add dev foo clsact'
alternative, where both, ingress and egress classifier can be configured
as in the below example. ingress qdisc supports attaching classifier to any
minor number whereas clsact has two fixed minors for muxing between the
lists, therefore to not break user space setups, they are better done as
two separate qdiscs.
I decided to extend the sch_ingress module with clsact functionality so
that commonly used code can be reused, the module is being aliased with
sch_clsact so that it can be auto-loaded properly. An alternative would have been
to add a flag when initializing ingress to alter its behaviour plus aliasing
to a different name (as it's more than just ingress). However, the first would
end up, based on the flag, choosing the new/old behaviour by calling different
function implementations to handle each anyway, the latter would require to
register the ingress qdisc once again under a different alias. So, this really
calls for a minimal, cleaner approach: a separate Qdisc_ops and Qdisc_class_ops
of its own that share the callbacks used by both.
Example, adding qdisc:
# tc qdisc add dev foo clsact
# tc qdisc show dev foo
qdisc mq 0: root
qdisc pfifo_fast 0: parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :3 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :4 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc clsact ffff: parent ffff:fff1
Adding filters (deleting, etc works analogous by specifying ingress/egress):
# tc filter add dev foo ingress bpf da obj bar.o sec ingress
# tc filter add dev foo egress bpf da obj bar.o sec egress
# tc filter show dev foo ingress
filter protocol all pref 49152 bpf
filter protocol all pref 49152 bpf handle 0x1 bar.o:[ingress] direct-action
# tc filter show dev foo egress
filter protocol all pref 49152 bpf
filter protocol all pref 49152 bpf handle 0x1 bar.o:[egress] direct-action
A 'tc filter show dev foo' or 'tc filter show dev foo parent ffff:' will
show an empty list for clsact. Either using the parent names (ingress/egress)
or specifying the full major/minor will then show the related filter lists.
Prior work on a mqprio prequeue() facility [1] was done mainly by John Fastabend.
[1] http://patchwork.ozlabs.org/patch/512949/
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-01-08 04:29:47 +07:00

#ifdef CONFIG_NET_EGRESS
void net_inc_egress_queue(void);
void net_dec_egress_queue(void);
#endif

void rtnetlink_init(void);
void __rtnl_unlock(void);
void rtnl_kfree_skbs(struct sk_buff *head, struct sk_buff *tail);

#define ASSERT_RTNL() \
	WARN_ONCE(!rtnl_is_locked(), \
		  "RTNL: assertion failed at %s (%d)\n", __FILE__, __LINE__)

extern int ndo_dflt_fdb_dump(struct sk_buff *skb,
			     struct netlink_callback *cb,
			     struct net_device *dev,
			     struct net_device *filter_dev,
			     int *idx);
extern int ndo_dflt_fdb_add(struct ndmsg *ndm,
			    struct nlattr *tb[],
			    struct net_device *dev,
			    const unsigned char *addr,
			    u16 vid,
			    u16 flags);
extern int ndo_dflt_fdb_del(struct ndmsg *ndm,
			    struct nlattr *tb[],
			    struct net_device *dev,
			    const unsigned char *addr,
			    u16 vid);

extern int ndo_dflt_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
				   struct net_device *dev, u16 mode,
				   u32 flags, u32 mask, int nlflags,
				   u32 filter_mask,
				   int (*vlan_fill)(struct sk_buff *skb,
						    struct net_device *dev,
						    u32 filter_mask));

#endif /* __LINUX_RTNETLINK_H */