Merge branch 'tipc-next'

Richard Alpe says:

====================
tipc: new netlink API

v3
The old API is not removed.

The new API is separated from the old one because of a bug in the
old tipc-config utility that uses it. When commands are added to the
existing genl_ops struct, the get-family response message grows to a
point where it overflows the small receive buffer in tipc-config,
subsequently breaking the tool. Hence the two genl_family and
genl_ops structs.

The new headers are placed in a new file called tipc_netlink.h rather
than added to tipc_config.h as they were in previous versions of this
patchset.
/v3

v2
Redesigned the "socket list command" to address David Miller's
comments on net-next v1 of this patchset.

Simply put, the problem is that we can have an arbitrary number of
sockets with an arbitrary number of associated publications. In the
previous patchset this was solved by nesting as many publications as
possible into a socket. If they did not all fit, the same socket was
sent again with the remaining publications. As David Miller pointed
out, this makes each message malformed, as the receiver cannot tell
from the data itself whether it has received a complete set or not.
This was flagged outside of the data and the client did the
reassembly.

o socket 1
  o publ 1
  o publ 2
o socket 1
  o publ 3
  o publ 4

In this patchset the listing is divided into socket listing and
publication listing, to avoid nesting data of arbitrary size.

TIPC_NL_SOCK_GET now dumps all sockets with any nested connection
information. However, it no longer includes publication information,
only a HAS_PUBL flag to indicate whether the socket has publications
or not. To complement this there is a new command, TIPC_NL_PUBL_GET,
which takes a socket as argument and dumps all associated
publications.

This means that at the top level the data is always complete. In the
case of "tipc socket list" (the new tipc-config -p) the tool first
queries all sockets with TIPC_NL_SOCK_GET and then, for each socket
that is published, fetches its publications with TIPC_NL_PUBL_GET.
This is slow for a large number of sockets with a low publication
count (the worst case), but integrity is preserved and there are no
malformed messages.
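
As an illustration, step two could look roughly like this from
user-space with libnl-3 (a minimal sketch, not part of this patchset;
the genl family id is assumed to be resolved elsewhere, e.g. via
genl_ctrl_resolve(), and the reply callback that parses the dumped
publications is assumed to be registered on the socket):

    #include <errno.h>
    #include <stdint.h>
    #include <netlink/netlink.h>
    #include <netlink/genl/genl.h>
    #include <linux/tipc_netlink.h>

    /* Dump all publications of one socket, identified by its ref */
    static int dump_publ(struct nl_sock *sk, int family, uint32_t sock_ref)
    {
            struct nl_msg *msg = nlmsg_alloc();
            struct nlattr *nest;

            if (!msg)
                    return -ENOMEM;
            genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0,
                        NLM_F_DUMP, TIPC_NL_PUBL_GET, TIPC_GENL_V2_VERSION);
            nest = nla_nest_start(msg, TIPC_NLA_SOCK);
            nla_put_u32(msg, TIPC_NLA_SOCK_REF, sock_ref);
            nla_nest_end(msg, nest);
            nl_send_auto(sk, msg);
            nlmsg_free(msg);
            return nl_recvmsgs_default(sk); /* multipart replies */
    }
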
/v2

This is a new netlink API for TIPC, intended to replace the existing
ASCII API. It utilizes many of the standard netlink facilities in the
kernel, such as attribute nesting and input policies.

There are a couple of reasons for this rewrite. The main and most
easily justifiable one is that the existing API doesn't scale: a TIPC
cluster with a large number of nodes, publications or ports will
rapidly exceed what the existing API can handle, resulting in
truncated or corrupt responses. In addition, the existing ASCII API
rarely uses "standard" kernel functions and has several TIPC-specific
functions for sanity checking and string formatting.

The new API utilizes standard functions for pushing data to socket
buffers and netlink attribute nesting to logically group data. It can
handle an arbitrary amount of data for things that are likely to
scale up as TIPC usage and/or cluster size increases.
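
For example, the nesting idiom used throughout this patchset groups
all attributes of one logical object under a single nested top-level
attribute, so user-space can parse each object as a unit. Condensed
from __tipc_nl_add_net() below:

    struct nlattr *attrs;

    attrs = nla_nest_start(msg->skb, TIPC_NLA_NET);  /* open nest */
    if (!attrs)
            return -EMSGSIZE;
    if (nla_put_u32(msg->skb, TIPC_NLA_NET_ID, tipc_net_id)) {
            nla_nest_cancel(msg->skb, attrs);        /* roll back */
            return -EMSGSIZE;
    }
    nla_nest_end(msg->skb, attrs);                   /* close nest */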

A new user-space tool has been developed to work with this new API.
It is called "tipc" and is part of the "tipc-utils" package that
comes with many Linux distributions. The new "tipc" tool utilizes
standard functions from libnl to format, send, receive and process
messages. The tool has borrowed design philosophies from git and the
ip tool: the syntax resembles that of ip, while its strong modularity
resembles that of git.

The existing tool for managing TIPC, "tipc-config", remains in the
package, but when built for kernels that have this new API it is
replaced by a script-based wrapper that maps the old syntax to the
new tool. This way, backwards compatibility is mostly preserved.

MORE ABOUT THE CODE

The main challenge here is handling data of arbitrary size, something
that was largely neglected in the old API design. For example, there
may be many sockets, each with a large number of associated
publications. In this case we can't assume that all ports, nor for
that matter all publications, fit inside a single netlink message.
Sending everything in one batch isn't an option either, as we need to
yield for the socket layer to cope.

This is solved by using the standard netlink callback for dumping
data and releasing the locks when the netlink message is full. The
dump mechanism then calls us back, and we keep a logical reference to
where we were when the message became full. This means that we are
not "atomic": what user-space retrieves isn't a snapshot at a certain
point in time but rather a continuously updated data set. In the case
where we can't find our way back, i.e. our logical reference is gone,
we set a standard flag (NLM_F_DUMP_INTR) to tell user-space that the
dump was interrupted.
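
Schematically, the dump callbacks below follow this resume pattern (a
sketch with hypothetical item_* helpers, not actual patch code):

    static int example_dump(struct sk_buff *skb, struct netlink_callback *cb)
    {
            u32 last_key = cb->args[0];     /* saved by the previous pass */
            struct item *it = item_resume(last_key);

            if (last_key && !it) {
                    /* Our logical reference is gone. We never set cb->seq,
                     * so faking a sequence change makes the final
                     * NLMSG_DONE carry NLM_F_DUMP_INTR.
                     */
                    cb->prev_seq = 1;
                    return -EPIPE;
            }
            for (; it; it = item_next(it)) {
                    if (item_fill_msg(skb, cb, it)) /* -EMSGSIZE: full */
                            break;
                    last_key = it->key;
            }
            cb->args[0] = last_key;         /* resume point */
            return skb->len;                /* non-zero: call us again */
    }
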
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
commit 49ed2617a0
David S. Miller <davem@davemloft.net>, 2014-11-21 15:01:35 -05:00
18 files changed, 2103 insertions(+), 7 deletions(-)

include/uapi/linux/tipc_netlink.h (new file)

@ -0,0 +1,244 @@
/*
* Copyright (c) 2014, Ericsson AB
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the names of the copyright holders nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _LINUX_TIPC_NETLINK_H_
#define _LINUX_TIPC_NETLINK_H_
#define TIPC_GENL_V2_NAME "TIPCv2"
#define TIPC_GENL_V2_VERSION 0x1
/* Netlink commands */
enum {
TIPC_NL_UNSPEC,
TIPC_NL_LEGACY,
TIPC_NL_BEARER_DISABLE,
TIPC_NL_BEARER_ENABLE,
TIPC_NL_BEARER_GET,
TIPC_NL_BEARER_SET,
TIPC_NL_SOCK_GET,
TIPC_NL_PUBL_GET,
TIPC_NL_LINK_GET,
TIPC_NL_LINK_SET,
TIPC_NL_LINK_RESET_STATS,
TIPC_NL_MEDIA_GET,
TIPC_NL_MEDIA_SET,
TIPC_NL_NODE_GET,
TIPC_NL_NET_GET,
TIPC_NL_NET_SET,
TIPC_NL_NAME_TABLE_GET,
__TIPC_NL_CMD_MAX,
TIPC_NL_CMD_MAX = __TIPC_NL_CMD_MAX - 1
};
/* Top level netlink attributes */
enum {
TIPC_NLA_UNSPEC,
TIPC_NLA_BEARER, /* nest */
TIPC_NLA_SOCK, /* nest */
TIPC_NLA_PUBL, /* nest */
TIPC_NLA_LINK, /* nest */
TIPC_NLA_MEDIA, /* nest */
TIPC_NLA_NODE, /* nest */
TIPC_NLA_NET, /* nest */
TIPC_NLA_NAME_TABLE, /* nest */
__TIPC_NLA_MAX,
TIPC_NLA_MAX = __TIPC_NLA_MAX - 1
};
/* Bearer info */
enum {
TIPC_NLA_BEARER_UNSPEC,
TIPC_NLA_BEARER_NAME, /* string */
TIPC_NLA_BEARER_PROP, /* nest */
TIPC_NLA_BEARER_DOMAIN, /* u32 */
__TIPC_NLA_BEARER_MAX,
TIPC_NLA_BEARER_MAX = __TIPC_NLA_BEARER_MAX - 1
};
/* Socket info */
enum {
TIPC_NLA_SOCK_UNSPEC,
TIPC_NLA_SOCK_ADDR, /* u32 */
TIPC_NLA_SOCK_REF, /* u32 */
TIPC_NLA_SOCK_CON, /* nest */
TIPC_NLA_SOCK_HAS_PUBL, /* flag */
__TIPC_NLA_SOCK_MAX,
TIPC_NLA_SOCK_MAX = __TIPC_NLA_SOCK_MAX - 1
};
/* Link info */
enum {
TIPC_NLA_LINK_UNSPEC,
TIPC_NLA_LINK_NAME, /* string */
TIPC_NLA_LINK_DEST, /* u32 */
TIPC_NLA_LINK_MTU, /* u32 */
TIPC_NLA_LINK_BROADCAST, /* flag */
TIPC_NLA_LINK_UP, /* flag */
TIPC_NLA_LINK_ACTIVE, /* flag */
TIPC_NLA_LINK_PROP, /* nest */
TIPC_NLA_LINK_STATS, /* nest */
TIPC_NLA_LINK_RX, /* u32 */
TIPC_NLA_LINK_TX, /* u32 */
__TIPC_NLA_LINK_MAX,
TIPC_NLA_LINK_MAX = __TIPC_NLA_LINK_MAX - 1
};
/* Media info */
enum {
TIPC_NLA_MEDIA_UNSPEC,
TIPC_NLA_MEDIA_NAME, /* string */
TIPC_NLA_MEDIA_PROP, /* nest */
__TIPC_NLA_MEDIA_MAX,
TIPC_NLA_MEDIA_MAX = __TIPC_NLA_MEDIA_MAX - 1
};
/* Node info */
enum {
TIPC_NLA_NODE_UNSPEC,
TIPC_NLA_NODE_ADDR, /* u32 */
TIPC_NLA_NODE_UP, /* flag */
__TIPC_NLA_NODE_MAX,
TIPC_NLA_NODE_MAX = __TIPC_NLA_NODE_MAX - 1
};
/* Net info */
enum {
TIPC_NLA_NET_UNSPEC,
TIPC_NLA_NET_ID, /* u32 */
TIPC_NLA_NET_ADDR, /* u32 */
__TIPC_NLA_NET_MAX,
TIPC_NLA_NET_MAX = __TIPC_NLA_NET_MAX - 1
};
/* Name table info */
enum {
TIPC_NLA_NAME_TABLE_UNSPEC,
TIPC_NLA_NAME_TABLE_PUBL, /* nest */
__TIPC_NLA_NAME_TABLE_MAX,
TIPC_NLA_NAME_TABLE_MAX = __TIPC_NLA_NAME_TABLE_MAX - 1
};
/* Publication info */
enum {
TIPC_NLA_PUBL_UNSPEC,
TIPC_NLA_PUBL_TYPE, /* u32 */
TIPC_NLA_PUBL_LOWER, /* u32 */
TIPC_NLA_PUBL_UPPER, /* u32 */
TIPC_NLA_PUBL_SCOPE, /* u32 */
TIPC_NLA_PUBL_NODE, /* u32 */
TIPC_NLA_PUBL_REF, /* u32 */
TIPC_NLA_PUBL_KEY, /* u32 */
__TIPC_NLA_PUBL_MAX,
TIPC_NLA_PUBL_MAX = __TIPC_NLA_PUBL_MAX - 1
};
/* Nest, connection info */
enum {
TIPC_NLA_CON_UNSPEC,
TIPC_NLA_CON_FLAG, /* flag */
TIPC_NLA_CON_NODE, /* u32 */
TIPC_NLA_CON_SOCK, /* u32 */
TIPC_NLA_CON_TYPE, /* u32 */
TIPC_NLA_CON_INST, /* u32 */
__TIPC_NLA_CON_MAX,
TIPC_NLA_CON_MAX = __TIPC_NLA_CON_MAX - 1
};
/* Nest, link properties. Valid for link, media and bearer */
enum {
TIPC_NLA_PROP_UNSPEC,
TIPC_NLA_PROP_PRIO, /* u32 */
TIPC_NLA_PROP_TOL, /* u32 */
TIPC_NLA_PROP_WIN, /* u32 */
__TIPC_NLA_PROP_MAX,
TIPC_NLA_PROP_MAX = __TIPC_NLA_PROP_MAX - 1
};
/* Nest, statistics info */
enum {
TIPC_NLA_STATS_UNSPEC,
TIPC_NLA_STATS_RX_INFO, /* u32 */
TIPC_NLA_STATS_RX_FRAGMENTS, /* u32 */
TIPC_NLA_STATS_RX_FRAGMENTED, /* u32 */
TIPC_NLA_STATS_RX_BUNDLES, /* u32 */
TIPC_NLA_STATS_RX_BUNDLED, /* u32 */
TIPC_NLA_STATS_TX_INFO, /* u32 */
TIPC_NLA_STATS_TX_FRAGMENTS, /* u32 */
TIPC_NLA_STATS_TX_FRAGMENTED, /* u32 */
TIPC_NLA_STATS_TX_BUNDLES, /* u32 */
TIPC_NLA_STATS_TX_BUNDLED, /* u32 */
TIPC_NLA_STATS_MSG_PROF_TOT, /* u32 */
TIPC_NLA_STATS_MSG_LEN_CNT, /* u32 */
TIPC_NLA_STATS_MSG_LEN_TOT, /* u32 */
TIPC_NLA_STATS_MSG_LEN_P0, /* u32 */
TIPC_NLA_STATS_MSG_LEN_P1, /* u32 */
TIPC_NLA_STATS_MSG_LEN_P2, /* u32 */
TIPC_NLA_STATS_MSG_LEN_P3, /* u32 */
TIPC_NLA_STATS_MSG_LEN_P4, /* u32 */
TIPC_NLA_STATS_MSG_LEN_P5, /* u32 */
TIPC_NLA_STATS_MSG_LEN_P6, /* u32 */
TIPC_NLA_STATS_RX_STATES, /* u32 */
TIPC_NLA_STATS_RX_PROBES, /* u32 */
TIPC_NLA_STATS_RX_NACKS, /* u32 */
TIPC_NLA_STATS_RX_DEFERRED, /* u32 */
TIPC_NLA_STATS_TX_STATES, /* u32 */
TIPC_NLA_STATS_TX_PROBES, /* u32 */
TIPC_NLA_STATS_TX_NACKS, /* u32 */
TIPC_NLA_STATS_TX_ACKS, /* u32 */
TIPC_NLA_STATS_RETRANSMITTED, /* u32 */
TIPC_NLA_STATS_DUPLICATES, /* u32 */
TIPC_NLA_STATS_LINK_CONGS, /* u32 */
TIPC_NLA_STATS_MAX_QUEUE, /* u32 */
TIPC_NLA_STATS_AVG_QUEUE, /* u32 */
__TIPC_NLA_STATS_MAX,
TIPC_NLA_STATS_MAX = __TIPC_NLA_STATS_MAX - 1
};
#endif

net/tipc/bcast.c

@ -767,6 +767,117 @@ void tipc_bcbearer_sort(struct tipc_node_map *nm_ptr, u32 node, bool action)
tipc_bclink_unlock();
}
int __tipc_nl_add_bc_link_stat(struct sk_buff *skb, struct tipc_stats *stats)
{
int i;
struct nlattr *nest;
struct nla_map {
__u32 key;
__u32 val;
};
struct nla_map map[] = {
{TIPC_NLA_STATS_RX_INFO, stats->recv_info},
{TIPC_NLA_STATS_RX_FRAGMENTS, stats->recv_fragments},
{TIPC_NLA_STATS_RX_FRAGMENTED, stats->recv_fragmented},
{TIPC_NLA_STATS_RX_BUNDLES, stats->recv_bundles},
{TIPC_NLA_STATS_RX_BUNDLED, stats->recv_bundled},
{TIPC_NLA_STATS_TX_INFO, stats->sent_info},
{TIPC_NLA_STATS_TX_FRAGMENTS, stats->sent_fragments},
{TIPC_NLA_STATS_TX_FRAGMENTED, stats->sent_fragmented},
{TIPC_NLA_STATS_TX_BUNDLES, stats->sent_bundles},
{TIPC_NLA_STATS_TX_BUNDLED, stats->sent_bundled},
{TIPC_NLA_STATS_RX_NACKS, stats->recv_nacks},
{TIPC_NLA_STATS_RX_DEFERRED, stats->deferred_recv},
{TIPC_NLA_STATS_TX_NACKS, stats->sent_nacks},
{TIPC_NLA_STATS_TX_ACKS, stats->sent_acks},
{TIPC_NLA_STATS_RETRANSMITTED, stats->retransmitted},
{TIPC_NLA_STATS_DUPLICATES, stats->duplicates},
{TIPC_NLA_STATS_LINK_CONGS, stats->link_congs},
{TIPC_NLA_STATS_MAX_QUEUE, stats->max_queue_sz},
{TIPC_NLA_STATS_AVG_QUEUE, stats->queue_sz_counts ?
(stats->accu_queue_sz / stats->queue_sz_counts) : 0}
};
nest = nla_nest_start(skb, TIPC_NLA_LINK_STATS);
if (!nest)
return -EMSGSIZE;
for (i = 0; i < ARRAY_SIZE(map); i++)
if (nla_put_u32(skb, map[i].key, map[i].val))
goto msg_full;
nla_nest_end(skb, nest);
return 0;
msg_full:
nla_nest_cancel(skb, nest);
return -EMSGSIZE;
}
int tipc_nl_add_bc_link(struct tipc_nl_msg *msg)
{
int err;
void *hdr;
struct nlattr *attrs;
struct nlattr *prop;
if (!bcl)
return 0;
tipc_bclink_lock();
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_v2_family,
NLM_F_MULTI, TIPC_NL_LINK_GET);
if (!hdr)
return -EMSGSIZE;
attrs = nla_nest_start(msg->skb, TIPC_NLA_LINK);
if (!attrs)
goto msg_full;
/* The broadcast link is always up */
if (nla_put_flag(msg->skb, TIPC_NLA_LINK_UP))
goto attr_msg_full;
if (nla_put_flag(msg->skb, TIPC_NLA_LINK_BROADCAST))
goto attr_msg_full;
if (nla_put_string(msg->skb, TIPC_NLA_LINK_NAME, bcl->name))
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, bcl->next_in_no))
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, bcl->next_out_no))
goto attr_msg_full;
prop = nla_nest_start(msg->skb, TIPC_NLA_LINK_PROP);
if (!prop)
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_WIN, bcl->queue_limit[0]))
goto prop_msg_full;
nla_nest_end(msg->skb, prop);
err = __tipc_nl_add_bc_link_stat(msg->skb, &bcl->stats);
if (err)
goto attr_msg_full;
tipc_bclink_unlock();
nla_nest_end(msg->skb, attrs);
genlmsg_end(msg->skb, hdr);
return 0;
prop_msg_full:
nla_nest_cancel(msg->skb, prop);
attr_msg_full:
nla_nest_cancel(msg->skb, attrs);
msg_full:
tipc_bclink_unlock();
genlmsg_cancel(msg->skb, hdr);
return -EMSGSIZE;
}
int tipc_bclink_stats(char *buf, const u32 buf_size)
{

net/tipc/bcast.h

@ -37,6 +37,8 @@
#ifndef _TIPC_BCAST_H
#define _TIPC_BCAST_H
#include "netlink.h"
#define MAX_NODES 4096
#define WSIZE 32
#define TIPC_BCLINK_RESET 1
@ -100,4 +102,6 @@ void tipc_bcbearer_sort(struct tipc_node_map *nm_ptr, u32 node, bool action);
uint tipc_bclink_get_mtu(void);
int tipc_bclink_xmit(struct sk_buff *buf);
void tipc_bclink_wakeup_users(void);
int tipc_nl_add_bc_link(struct tipc_nl_msg *msg);
#endif

net/tipc/bearer.c

@ -1,7 +1,7 @@
/*
* net/tipc/bearer.c: TIPC bearer code
*
* Copyright (c) 1996-2006, 2013, Ericsson AB
* Copyright (c) 1996-2006, 2013-2014, Ericsson AB
* Copyright (c) 2004-2006, 2010-2013, Wind River Systems
* All rights reserved.
*
@ -37,6 +37,7 @@
#include "core.h"
#include "config.h"
#include "bearer.h"
#include "link.h"
#include "discover.h"
#define MAX_ADDR_STR 60
@ -49,6 +50,23 @@ static struct tipc_media * const media_info_array[] = {
NULL
};
static const struct nla_policy
tipc_nl_bearer_policy[TIPC_NLA_BEARER_MAX + 1] = {
[TIPC_NLA_BEARER_UNSPEC] = { .type = NLA_UNSPEC },
[TIPC_NLA_BEARER_NAME] = {
.type = NLA_STRING,
.len = TIPC_MAX_BEARER_NAME
},
[TIPC_NLA_BEARER_PROP] = { .type = NLA_NESTED },
[TIPC_NLA_BEARER_DOMAIN] = { .type = NLA_U32 }
};
static const struct nla_policy tipc_nl_media_policy[TIPC_NLA_MEDIA_MAX + 1] = {
[TIPC_NLA_MEDIA_UNSPEC] = { .type = NLA_UNSPEC },
[TIPC_NLA_MEDIA_NAME] = { .type = NLA_STRING },
[TIPC_NLA_MEDIA_PROP] = { .type = NLA_NESTED }
};
struct tipc_bearer __rcu *bearer_list[MAX_BEARERS + 1];
static void bearer_disable(struct tipc_bearer *b_ptr, bool shutting_down);
@ -627,3 +645,428 @@ void tipc_bearer_stop(void)
}
}
}
/* Caller should hold rtnl_lock to protect the bearer */
int __tipc_nl_add_bearer(struct tipc_nl_msg *msg, struct tipc_bearer *bearer)
{
void *hdr;
struct nlattr *attrs;
struct nlattr *prop;
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_v2_family,
NLM_F_MULTI, TIPC_NL_BEARER_GET);
if (!hdr)
return -EMSGSIZE;
attrs = nla_nest_start(msg->skb, TIPC_NLA_BEARER);
if (!attrs)
goto msg_full;
if (nla_put_string(msg->skb, TIPC_NLA_BEARER_NAME, bearer->name))
goto attr_msg_full;
prop = nla_nest_start(msg->skb, TIPC_NLA_BEARER_PROP);
if (!prop)
goto prop_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_PRIO, bearer->priority))
goto prop_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_TOL, bearer->tolerance))
goto prop_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_WIN, bearer->window))
goto prop_msg_full;
nla_nest_end(msg->skb, prop);
nla_nest_end(msg->skb, attrs);
genlmsg_end(msg->skb, hdr);
return 0;
prop_msg_full:
nla_nest_cancel(msg->skb, prop);
attr_msg_full:
nla_nest_cancel(msg->skb, attrs);
msg_full:
genlmsg_cancel(msg->skb, hdr);
return -EMSGSIZE;
}
int tipc_nl_bearer_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
int err;
int i = cb->args[0];
struct tipc_bearer *bearer;
struct tipc_nl_msg msg;
if (i == MAX_BEARERS)
return 0;
msg.skb = skb;
msg.portid = NETLINK_CB(cb->skb).portid;
msg.seq = cb->nlh->nlmsg_seq;
rtnl_lock();
for (i = 0; i < MAX_BEARERS; i++) {
bearer = rtnl_dereference(bearer_list[i]);
if (!bearer)
continue;
err = __tipc_nl_add_bearer(&msg, bearer);
if (err)
break;
}
rtnl_unlock();
cb->args[0] = i;
return skb->len;
}
int tipc_nl_bearer_get(struct sk_buff *skb, struct genl_info *info)
{
int err;
char *name;
struct sk_buff *rep;
struct tipc_bearer *bearer;
struct tipc_nl_msg msg;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
if (!info->attrs[TIPC_NLA_BEARER])
return -EINVAL;
err = nla_parse_nested(attrs, TIPC_NLA_BEARER_MAX,
info->attrs[TIPC_NLA_BEARER],
tipc_nl_bearer_policy);
if (err)
return err;
if (!attrs[TIPC_NLA_BEARER_NAME])
return -EINVAL;
name = nla_data(attrs[TIPC_NLA_BEARER_NAME]);
rep = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
if (!rep)
return -ENOMEM;
msg.skb = rep;
msg.portid = info->snd_portid;
msg.seq = info->snd_seq;
rtnl_lock();
bearer = tipc_bearer_find(name);
if (!bearer) {
err = -EINVAL;
goto err_out;
}
err = __tipc_nl_add_bearer(&msg, bearer);
if (err)
goto err_out;
rtnl_unlock();
return genlmsg_reply(rep, info);
err_out:
rtnl_unlock();
nlmsg_free(rep);
return err;
}
int tipc_nl_bearer_disable(struct sk_buff *skb, struct genl_info *info)
{
int err;
char *name;
struct tipc_bearer *bearer;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
if (!info->attrs[TIPC_NLA_BEARER])
return -EINVAL;
err = nla_parse_nested(attrs, TIPC_NLA_BEARER_MAX,
info->attrs[TIPC_NLA_BEARER],
tipc_nl_bearer_policy);
if (err)
return err;
if (!attrs[TIPC_NLA_BEARER_NAME])
return -EINVAL;
name = nla_data(attrs[TIPC_NLA_BEARER_NAME]);
rtnl_lock();
bearer = tipc_bearer_find(name);
if (!bearer) {
rtnl_unlock();
return -EINVAL;
}
bearer_disable(bearer, false);
rtnl_unlock();
return 0;
}
int tipc_nl_bearer_enable(struct sk_buff *skb, struct genl_info *info)
{
int err;
char *bearer;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
u32 domain;
u32 prio;
prio = TIPC_MEDIA_LINK_PRI;
domain = tipc_own_addr & TIPC_CLUSTER_MASK;
if (!info->attrs[TIPC_NLA_BEARER])
return -EINVAL;
err = nla_parse_nested(attrs, TIPC_NLA_BEARER_MAX,
info->attrs[TIPC_NLA_BEARER],
tipc_nl_bearer_policy);
if (err)
return err;
if (!attrs[TIPC_NLA_BEARER_NAME])
return -EINVAL;
bearer = nla_data(attrs[TIPC_NLA_BEARER_NAME]);
if (attrs[TIPC_NLA_BEARER_DOMAIN])
domain = nla_get_u32(attrs[TIPC_NLA_BEARER_DOMAIN]);
if (attrs[TIPC_NLA_BEARER_PROP]) {
struct nlattr *props[TIPC_NLA_PROP_MAX + 1];
err = tipc_nl_parse_link_prop(attrs[TIPC_NLA_BEARER_PROP],
props);
if (err)
return err;
if (props[TIPC_NLA_PROP_PRIO])
prio = nla_get_u32(props[TIPC_NLA_PROP_PRIO]);
}
rtnl_lock();
err = tipc_enable_bearer(bearer, domain, prio);
if (err) {
rtnl_unlock();
return err;
}
rtnl_unlock();
return 0;
}
int tipc_nl_bearer_set(struct sk_buff *skb, struct genl_info *info)
{
int err;
char *name;
struct tipc_bearer *b;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
if (!info->attrs[TIPC_NLA_BEARER])
return -EINVAL;
err = nla_parse_nested(attrs, TIPC_NLA_BEARER_MAX,
info->attrs[TIPC_NLA_BEARER],
tipc_nl_bearer_policy);
if (err)
return err;
if (!attrs[TIPC_NLA_BEARER_NAME])
return -EINVAL;
name = nla_data(attrs[TIPC_NLA_BEARER_NAME]);
rtnl_lock();
b = tipc_bearer_find(name);
if (!b) {
rtnl_unlock();
return -EINVAL;
}
if (attrs[TIPC_NLA_BEARER_PROP]) {
struct nlattr *props[TIPC_NLA_PROP_MAX + 1];
err = tipc_nl_parse_link_prop(attrs[TIPC_NLA_BEARER_PROP],
props);
if (err) {
rtnl_unlock();
return err;
}
if (props[TIPC_NLA_PROP_TOL])
b->tolerance = nla_get_u32(props[TIPC_NLA_PROP_TOL]);
if (props[TIPC_NLA_PROP_PRIO])
b->priority = nla_get_u32(props[TIPC_NLA_PROP_PRIO]);
if (props[TIPC_NLA_PROP_WIN])
b->window = nla_get_u32(props[TIPC_NLA_PROP_WIN]);
}
rtnl_unlock();
return 0;
}
int __tipc_nl_add_media(struct tipc_nl_msg *msg, struct tipc_media *media)
{
void *hdr;
struct nlattr *attrs;
struct nlattr *prop;
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_v2_family,
NLM_F_MULTI, TIPC_NL_MEDIA_GET);
if (!hdr)
return -EMSGSIZE;
attrs = nla_nest_start(msg->skb, TIPC_NLA_MEDIA);
if (!attrs)
goto msg_full;
if (nla_put_string(msg->skb, TIPC_NLA_MEDIA_NAME, media->name))
goto attr_msg_full;
prop = nla_nest_start(msg->skb, TIPC_NLA_MEDIA_PROP);
if (!prop)
goto prop_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_PRIO, media->priority))
goto prop_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_TOL, media->tolerance))
goto prop_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_WIN, media->window))
goto prop_msg_full;
nla_nest_end(msg->skb, prop);
nla_nest_end(msg->skb, attrs);
genlmsg_end(msg->skb, hdr);
return 0;
prop_msg_full:
nla_nest_cancel(msg->skb, prop);
attr_msg_full:
nla_nest_cancel(msg->skb, attrs);
msg_full:
genlmsg_cancel(msg->skb, hdr);
return -EMSGSIZE;
}
int tipc_nl_media_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
int err;
int i = cb->args[0];
struct tipc_nl_msg msg;
if (i == MAX_MEDIA)
return 0;
msg.skb = skb;
msg.portid = NETLINK_CB(cb->skb).portid;
msg.seq = cb->nlh->nlmsg_seq;
rtnl_lock();
for (; media_info_array[i] != NULL; i++) {
err = __tipc_nl_add_media(&msg, media_info_array[i]);
if (err)
break;
}
rtnl_unlock();
cb->args[0] = i;
return skb->len;
}
int tipc_nl_media_get(struct sk_buff *skb, struct genl_info *info)
{
int err;
char *name;
struct tipc_nl_msg msg;
struct tipc_media *media;
struct sk_buff *rep;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
if (!info->attrs[TIPC_NLA_MEDIA])
return -EINVAL;
err = nla_parse_nested(attrs, TIPC_NLA_MEDIA_MAX,
info->attrs[TIPC_NLA_MEDIA],
tipc_nl_media_policy);
if (err)
return err;
if (!attrs[TIPC_NLA_MEDIA_NAME])
return -EINVAL;
name = nla_data(attrs[TIPC_NLA_MEDIA_NAME]);
rep = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
if (!rep)
return -ENOMEM;
msg.skb = rep;
msg.portid = info->snd_portid;
msg.seq = info->snd_seq;
rtnl_lock();
media = tipc_media_find(name);
if (!media) {
err = -EINVAL;
goto err_out;
}
err = __tipc_nl_add_media(&msg, media);
if (err)
goto err_out;
rtnl_unlock();
return genlmsg_reply(rep, info);
err_out:
rtnl_unlock();
nlmsg_free(rep);
return err;
}
int tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info)
{
int err;
char *name;
struct tipc_media *m;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
if (!info->attrs[TIPC_NLA_MEDIA])
return -EINVAL;
err = nla_parse_nested(attrs, TIPC_NLA_MEDIA_MAX,
info->attrs[TIPC_NLA_MEDIA],
tipc_nl_media_policy);
if (err)
return err;
if (!attrs[TIPC_NLA_MEDIA_NAME])
return -EINVAL;
name = nla_data(attrs[TIPC_NLA_MEDIA_NAME]);
rtnl_lock();
m = tipc_media_find(name);
if (!m) {
rtnl_unlock();
return -EINVAL;
}
if (attrs[TIPC_NLA_MEDIA_PROP]) {
struct nlattr *props[TIPC_NLA_PROP_MAX + 1];
err = tipc_nl_parse_link_prop(attrs[TIPC_NLA_MEDIA_PROP],
props);
if (err) {
rtnl_unlock();
return err;
}
if (props[TIPC_NLA_PROP_TOL])
m->tolerance = nla_get_u32(props[TIPC_NLA_PROP_TOL]);
if (props[TIPC_NLA_PROP_PRIO])
m->priority = nla_get_u32(props[TIPC_NLA_PROP_PRIO]);
if (props[TIPC_NLA_PROP_WIN])
m->window = nla_get_u32(props[TIPC_NLA_PROP_WIN]);
}
rtnl_unlock();
return 0;
}

net/tipc/bearer.h

@ -1,7 +1,7 @@
/*
* net/tipc/bearer.h: Include file for TIPC bearer code
*
* Copyright (c) 1996-2006, 2013, Ericsson AB
* Copyright (c) 1996-2006, 2013-2014, Ericsson AB
* Copyright (c) 2005, 2010-2011, Wind River Systems
* All rights reserved.
*
@ -38,6 +38,8 @@
#define _TIPC_BEARER_H
#include "bcast.h"
#include "netlink.h"
#include <net/genetlink.h>
#define MAX_BEARERS 2
#define MAX_MEDIA 2
@ -176,6 +178,16 @@ extern struct tipc_media eth_media_info;
extern struct tipc_media ib_media_info;
#endif
int tipc_nl_bearer_disable(struct sk_buff *skb, struct genl_info *info);
int tipc_nl_bearer_enable(struct sk_buff *skb, struct genl_info *info);
int tipc_nl_bearer_dump(struct sk_buff *skb, struct netlink_callback *cb);
int tipc_nl_bearer_get(struct sk_buff *skb, struct genl_info *info);
int tipc_nl_bearer_set(struct sk_buff *skb, struct genl_info *info);
int tipc_nl_media_dump(struct sk_buff *skb, struct netlink_callback *cb);
int tipc_nl_media_get(struct sk_buff *skb, struct genl_info *info);
int tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info);
int tipc_media_set_priority(const char *name, u32 new_value);
int tipc_media_set_window(const char *name, u32 new_value);
void tipc_media_addr_printf(char *buf, int len, struct tipc_media_addr *a);

net/tipc/core.h

@ -41,6 +41,7 @@
#include <linux/tipc.h>
#include <linux/tipc_config.h>
#include <linux/tipc_netlink.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/errno.h>

net/tipc/link.c

@ -36,10 +36,12 @@
#include "core.h"
#include "link.h"
#include "bcast.h"
#include "socket.h"
#include "name_distr.h"
#include "discover.h"
#include "config.h"
#include "netlink.h"
#include <linux/pkt_sched.h>
@ -50,6 +52,30 @@ static const char *link_co_err = "Link changeover error, ";
static const char *link_rst_msg = "Resetting link ";
static const char *link_unk_evt = "Unknown link event ";
static const struct nla_policy tipc_nl_link_policy[TIPC_NLA_LINK_MAX + 1] = {
[TIPC_NLA_LINK_UNSPEC] = { .type = NLA_UNSPEC },
[TIPC_NLA_LINK_NAME] = {
.type = NLA_STRING,
.len = TIPC_MAX_LINK_NAME
},
[TIPC_NLA_LINK_MTU] = { .type = NLA_U32 },
[TIPC_NLA_LINK_BROADCAST] = { .type = NLA_FLAG },
[TIPC_NLA_LINK_UP] = { .type = NLA_FLAG },
[TIPC_NLA_LINK_ACTIVE] = { .type = NLA_FLAG },
[TIPC_NLA_LINK_PROP] = { .type = NLA_NESTED },
[TIPC_NLA_LINK_STATS] = { .type = NLA_NESTED },
[TIPC_NLA_LINK_RX] = { .type = NLA_U32 },
[TIPC_NLA_LINK_TX] = { .type = NLA_U32 }
};
/* Properties valid for media, bearer and link */
static const struct nla_policy tipc_nl_prop_policy[TIPC_NLA_PROP_MAX + 1] = {
[TIPC_NLA_PROP_UNSPEC] = { .type = NLA_UNSPEC },
[TIPC_NLA_PROP_PRIO] = { .type = NLA_U32 },
[TIPC_NLA_PROP_TOL] = { .type = NLA_U32 },
[TIPC_NLA_PROP_WIN] = { .type = NLA_U32 }
};
/*
* Out-of-range value for link session numbers
*/
@ -2376,3 +2402,433 @@ static void link_print(struct tipc_link *l_ptr, const char *str)
else
pr_cont("\n");
}
/* Parse and validate nested (link) properties valid for media, bearer and link
*/
int tipc_nl_parse_link_prop(struct nlattr *prop, struct nlattr *props[])
{
int err;
err = nla_parse_nested(props, TIPC_NLA_PROP_MAX, prop,
tipc_nl_prop_policy);
if (err)
return err;
if (props[TIPC_NLA_PROP_PRIO]) {
u32 prio;
prio = nla_get_u32(props[TIPC_NLA_PROP_PRIO]);
if (prio > TIPC_MAX_LINK_PRI)
return -EINVAL;
}
if (props[TIPC_NLA_PROP_TOL]) {
u32 tol;
tol = nla_get_u32(props[TIPC_NLA_PROP_TOL]);
if ((tol < TIPC_MIN_LINK_TOL) || (tol > TIPC_MAX_LINK_TOL))
return -EINVAL;
}
if (props[TIPC_NLA_PROP_WIN]) {
u32 win;
win = nla_get_u32(props[TIPC_NLA_PROP_WIN]);
if ((win < TIPC_MIN_LINK_WIN) || (win > TIPC_MAX_LINK_WIN))
return -EINVAL;
}
return 0;
}
int tipc_nl_link_set(struct sk_buff *skb, struct genl_info *info)
{
int err;
int res = 0;
int bearer_id;
char *name;
struct tipc_link *link;
struct tipc_node *node;
struct nlattr *attrs[TIPC_NLA_LINK_MAX + 1];
if (!info->attrs[TIPC_NLA_LINK])
return -EINVAL;
err = nla_parse_nested(attrs, TIPC_NLA_LINK_MAX,
info->attrs[TIPC_NLA_LINK],
tipc_nl_link_policy);
if (err)
return err;
if (!attrs[TIPC_NLA_LINK_NAME])
return -EINVAL;
name = nla_data(attrs[TIPC_NLA_LINK_NAME]);
node = tipc_link_find_owner(name, &bearer_id);
if (!node)
return -EINVAL;
tipc_node_lock(node);
link = node->links[bearer_id];
if (!link) {
res = -EINVAL;
goto out;
}
if (attrs[TIPC_NLA_LINK_PROP]) {
struct nlattr *props[TIPC_NLA_PROP_MAX + 1];
err = tipc_nl_parse_link_prop(attrs[TIPC_NLA_LINK_PROP],
props);
if (err) {
res = err;
goto out;
}
if (props[TIPC_NLA_PROP_TOL]) {
u32 tol;
tol = nla_get_u32(props[TIPC_NLA_PROP_TOL]);
link_set_supervision_props(link, tol);
tipc_link_proto_xmit(link, STATE_MSG, 0, 0, tol, 0, 0);
}
if (props[TIPC_NLA_PROP_PRIO]) {
u32 prio;
prio = nla_get_u32(props[TIPC_NLA_PROP_PRIO]);
link->priority = prio;
tipc_link_proto_xmit(link, STATE_MSG, 0, 0, 0, prio, 0);
}
if (props[TIPC_NLA_PROP_WIN]) {
u32 win;
win = nla_get_u32(props[TIPC_NLA_PROP_WIN]);
tipc_link_set_queue_limits(link, win);
}
}
out:
tipc_node_unlock(node);
return res;
}
int __tipc_nl_add_stats(struct sk_buff *skb, struct tipc_stats *s)
{
int i;
struct nlattr *stats;
struct nla_map {
u32 key;
u32 val;
};
struct nla_map map[] = {
{TIPC_NLA_STATS_RX_INFO, s->recv_info},
{TIPC_NLA_STATS_RX_FRAGMENTS, s->recv_fragments},
{TIPC_NLA_STATS_RX_FRAGMENTED, s->recv_fragmented},
{TIPC_NLA_STATS_RX_BUNDLES, s->recv_bundles},
{TIPC_NLA_STATS_RX_BUNDLED, s->recv_bundled},
{TIPC_NLA_STATS_TX_INFO, s->sent_info},
{TIPC_NLA_STATS_TX_FRAGMENTS, s->sent_fragments},
{TIPC_NLA_STATS_TX_FRAGMENTED, s->sent_fragmented},
{TIPC_NLA_STATS_TX_BUNDLES, s->sent_bundles},
{TIPC_NLA_STATS_TX_BUNDLED, s->sent_bundled},
{TIPC_NLA_STATS_MSG_PROF_TOT, (s->msg_length_counts) ?
s->msg_length_counts : 1},
{TIPC_NLA_STATS_MSG_LEN_CNT, s->msg_length_counts},
{TIPC_NLA_STATS_MSG_LEN_TOT, s->msg_lengths_total},
{TIPC_NLA_STATS_MSG_LEN_P0, s->msg_length_profile[0]},
{TIPC_NLA_STATS_MSG_LEN_P1, s->msg_length_profile[1]},
{TIPC_NLA_STATS_MSG_LEN_P2, s->msg_length_profile[2]},
{TIPC_NLA_STATS_MSG_LEN_P3, s->msg_length_profile[3]},
{TIPC_NLA_STATS_MSG_LEN_P4, s->msg_length_profile[4]},
{TIPC_NLA_STATS_MSG_LEN_P5, s->msg_length_profile[5]},
{TIPC_NLA_STATS_MSG_LEN_P6, s->msg_length_profile[6]},
{TIPC_NLA_STATS_RX_STATES, s->recv_states},
{TIPC_NLA_STATS_RX_PROBES, s->recv_probes},
{TIPC_NLA_STATS_RX_NACKS, s->recv_nacks},
{TIPC_NLA_STATS_RX_DEFERRED, s->deferred_recv},
{TIPC_NLA_STATS_TX_STATES, s->sent_states},
{TIPC_NLA_STATS_TX_PROBES, s->sent_probes},
{TIPC_NLA_STATS_TX_NACKS, s->sent_nacks},
{TIPC_NLA_STATS_TX_ACKS, s->sent_acks},
{TIPC_NLA_STATS_RETRANSMITTED, s->retransmitted},
{TIPC_NLA_STATS_DUPLICATES, s->duplicates},
{TIPC_NLA_STATS_LINK_CONGS, s->link_congs},
{TIPC_NLA_STATS_MAX_QUEUE, s->max_queue_sz},
{TIPC_NLA_STATS_AVG_QUEUE, s->queue_sz_counts ?
(s->accu_queue_sz / s->queue_sz_counts) : 0}
};
stats = nla_nest_start(skb, TIPC_NLA_LINK_STATS);
if (!stats)
return -EMSGSIZE;
for (i = 0; i < ARRAY_SIZE(map); i++)
if (nla_put_u32(skb, map[i].key, map[i].val))
goto msg_full;
nla_nest_end(skb, stats);
return 0;
msg_full:
nla_nest_cancel(skb, stats);
return -EMSGSIZE;
}
/* Caller should hold appropriate locks to protect the link */
int __tipc_nl_add_link(struct tipc_nl_msg *msg, struct tipc_link *link)
{
int err;
void *hdr;
struct nlattr *attrs;
struct nlattr *prop;
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_v2_family,
NLM_F_MULTI, TIPC_NL_LINK_GET);
if (!hdr)
return -EMSGSIZE;
attrs = nla_nest_start(msg->skb, TIPC_NLA_LINK);
if (!attrs)
goto msg_full;
if (nla_put_string(msg->skb, TIPC_NLA_LINK_NAME, link->name))
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_LINK_DEST,
tipc_cluster_mask(tipc_own_addr)))
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_LINK_MTU, link->max_pkt))
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, link->next_in_no))
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, link->next_out_no))
goto attr_msg_full;
if (tipc_link_is_up(link))
if (nla_put_flag(msg->skb, TIPC_NLA_LINK_UP))
goto attr_msg_full;
if (tipc_link_is_active(link))
if (nla_put_flag(msg->skb, TIPC_NLA_LINK_ACTIVE))
goto attr_msg_full;
prop = nla_nest_start(msg->skb, TIPC_NLA_LINK_PROP);
if (!prop)
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_PRIO, link->priority))
goto prop_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_TOL, link->tolerance))
goto prop_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_WIN,
link->queue_limit[TIPC_LOW_IMPORTANCE]))
goto prop_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PROP_PRIO, link->priority))
goto prop_msg_full;
nla_nest_end(msg->skb, prop);
err = __tipc_nl_add_stats(msg->skb, &link->stats);
if (err)
goto attr_msg_full;
nla_nest_end(msg->skb, attrs);
genlmsg_end(msg->skb, hdr);
return 0;
prop_msg_full:
nla_nest_cancel(msg->skb, prop);
attr_msg_full:
nla_nest_cancel(msg->skb, attrs);
msg_full:
genlmsg_cancel(msg->skb, hdr);
return -EMSGSIZE;
}
/* Caller should hold node lock */
int __tipc_nl_add_node_links(struct tipc_nl_msg *msg, struct tipc_node *node,
u32 *prev_link)
{
u32 i;
int err;
for (i = *prev_link; i < MAX_BEARERS; i++) {
*prev_link = i;
if (!node->links[i])
continue;
err = __tipc_nl_add_link(msg, node->links[i]);
if (err)
return err;
}
*prev_link = 0;
return 0;
}
int tipc_nl_link_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
struct tipc_node *node;
struct tipc_nl_msg msg;
u32 prev_node = cb->args[0];
u32 prev_link = cb->args[1];
int done = cb->args[2];
int err;
if (done)
return 0;
msg.skb = skb;
msg.portid = NETLINK_CB(cb->skb).portid;
msg.seq = cb->nlh->nlmsg_seq;
rcu_read_lock();
if (prev_node) {
node = tipc_node_find(prev_node);
if (!node) {
/* We never set seq or call nl_dump_check_consistent();
* this means that setting prev_seq here will cause the
* consistency check to fail in the netlink callback
* handler, resulting in the last NLMSG_DONE message
* having the NLM_F_DUMP_INTR flag set.
*/
cb->prev_seq = 1;
goto out;
}
list_for_each_entry_continue_rcu(node, &tipc_node_list, list) {
tipc_node_lock(node);
err = __tipc_nl_add_node_links(&msg, node, &prev_link);
tipc_node_unlock(node);
if (err)
goto out;
prev_node = node->addr;
}
} else {
err = tipc_nl_add_bc_link(&msg);
if (err)
goto out;
list_for_each_entry_rcu(node, &tipc_node_list, list) {
tipc_node_lock(node);
err = __tipc_nl_add_node_links(&msg, node, &prev_link);
tipc_node_unlock(node);
if (err)
goto out;
prev_node = node->addr;
}
}
done = 1;
out:
rcu_read_unlock();
cb->args[0] = prev_node;
cb->args[1] = prev_link;
cb->args[2] = done;
return skb->len;
}
int tipc_nl_link_get(struct sk_buff *skb, struct genl_info *info)
{
struct sk_buff *ans_skb;
struct tipc_nl_msg msg;
struct tipc_link *link;
struct tipc_node *node;
char *name;
int bearer_id;
int err;
if (!info->attrs[TIPC_NLA_LINK_NAME])
return -EINVAL;
name = nla_data(info->attrs[TIPC_NLA_LINK_NAME]);
node = tipc_link_find_owner(name, &bearer_id);
if (!node)
return -EINVAL;
ans_skb = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
if (!ans_skb)
return -ENOMEM;
msg.skb = ans_skb;
msg.portid = info->snd_portid;
msg.seq = info->snd_seq;
tipc_node_lock(node);
link = node->links[bearer_id];
if (!link) {
err = -EINVAL;
goto err_out;
}
err = __tipc_nl_add_link(&msg, link);
if (err)
goto err_out;
tipc_node_unlock(node);
return genlmsg_reply(ans_skb, info);
err_out:
tipc_node_unlock(node);
nlmsg_free(ans_skb);
return err;
}
int tipc_nl_link_reset_stats(struct sk_buff *skb, struct genl_info *info)
{
int err;
char *link_name;
unsigned int bearer_id;
struct tipc_link *link;
struct tipc_node *node;
struct nlattr *attrs[TIPC_NLA_LINK_MAX + 1];
if (!info->attrs[TIPC_NLA_LINK])
return -EINVAL;
err = nla_parse_nested(attrs, TIPC_NLA_LINK_MAX,
info->attrs[TIPC_NLA_LINK],
tipc_nl_link_policy);
if (err)
return err;
if (!attrs[TIPC_NLA_LINK_NAME])
return -EINVAL;
link_name = nla_data(attrs[TIPC_NLA_LINK_NAME]);
if (strcmp(link_name, tipc_bclink_name) == 0) {
err = tipc_bclink_reset_stats();
if (err)
return err;
return 0;
}
node = tipc_link_find_owner(link_name, &bearer_id);
if (!node)
return -EINVAL;
tipc_node_lock(node);
link = node->links[bearer_id];
if (!link) {
tipc_node_unlock(node);
return -EINVAL;
}
link_reset_statistics(link);
tipc_node_unlock(node);
return 0;
}

net/tipc/link.h

@ -37,6 +37,7 @@
#ifndef _TIPC_LINK_H
#define _TIPC_LINK_H
#include <net/genetlink.h>
#include "msg.h"
#include "node.h"
@ -239,6 +240,12 @@ void tipc_link_set_queue_limits(struct tipc_link *l_ptr, u32 window);
void tipc_link_retransmit(struct tipc_link *l_ptr,
struct sk_buff *start, u32 retransmits);
int tipc_nl_link_dump(struct sk_buff *skb, struct netlink_callback *cb);
int tipc_nl_link_get(struct sk_buff *skb, struct genl_info *info);
int tipc_nl_link_set(struct sk_buff *skb, struct genl_info *info);
int tipc_nl_link_reset_stats(struct sk_buff *skb, struct genl_info *info);
int tipc_nl_parse_link_prop(struct nlattr *prop, struct nlattr *props[]);
/*
* Link sequence number manipulation routines (uses modulo 2**16 arithmetic)
*/

net/tipc/name_table.c

@ -1,7 +1,7 @@
/*
* net/tipc/name_table.c: TIPC name table code
*
* Copyright (c) 2000-2006, Ericsson AB
* Copyright (c) 2000-2006, 2014, Ericsson AB
* Copyright (c) 2004-2008, 2010-2011, Wind River Systems
* All rights reserved.
*
@ -42,6 +42,12 @@
#define TIPC_NAMETBL_SIZE 1024 /* must be a power of 2 */
static const struct nla_policy
tipc_nl_name_table_policy[TIPC_NLA_NAME_TABLE_MAX + 1] = {
[TIPC_NLA_NAME_TABLE_UNSPEC] = { .type = NLA_UNSPEC },
[TIPC_NLA_NAME_TABLE_PUBL] = { .type = NLA_NESTED }
};
/**
* struct name_info - name sequence publication info
* @node_list: circular list of publications made by own node
@ -995,3 +1001,185 @@ void tipc_nametbl_stop(void)
table.types = NULL;
write_unlock_bh(&tipc_nametbl_lock);
}
int __tipc_nl_add_nametable_publ(struct tipc_nl_msg *msg, struct name_seq *seq,
struct sub_seq *sseq, u32 *last_publ)
{
void *hdr;
struct nlattr *attrs;
struct nlattr *publ;
struct publication *p;
if (*last_publ) {
list_for_each_entry(p, &sseq->info->zone_list, zone_list)
if (p->key == *last_publ)
break;
if (p->key != *last_publ)
return -EPIPE;
} else {
p = list_first_entry(&sseq->info->zone_list, struct publication,
zone_list);
}
list_for_each_entry_from(p, &sseq->info->zone_list, zone_list) {
*last_publ = p->key;
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq,
&tipc_genl_v2_family, NLM_F_MULTI,
TIPC_NL_NAME_TABLE_GET);
if (!hdr)
return -EMSGSIZE;
attrs = nla_nest_start(msg->skb, TIPC_NLA_NAME_TABLE);
if (!attrs)
goto msg_full;
publ = nla_nest_start(msg->skb, TIPC_NLA_NAME_TABLE_PUBL);
if (!publ)
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PUBL_TYPE, seq->type))
goto publ_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PUBL_LOWER, sseq->lower))
goto publ_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PUBL_UPPER, sseq->upper))
goto publ_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PUBL_SCOPE, p->scope))
goto publ_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PUBL_NODE, p->node))
goto publ_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PUBL_REF, p->ref))
goto publ_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_PUBL_KEY, p->key))
goto publ_msg_full;
nla_nest_end(msg->skb, publ);
nla_nest_end(msg->skb, attrs);
genlmsg_end(msg->skb, hdr);
}
*last_publ = 0;
return 0;
publ_msg_full:
nla_nest_cancel(msg->skb, publ);
attr_msg_full:
nla_nest_cancel(msg->skb, attrs);
msg_full:
genlmsg_cancel(msg->skb, hdr);
return -EMSGSIZE;
}
int __tipc_nl_subseq_list(struct tipc_nl_msg *msg, struct name_seq *seq,
u32 *last_lower, u32 *last_publ)
{
struct sub_seq *sseq;
struct sub_seq *sseq_start;
int err;
if (*last_lower) {
sseq_start = nameseq_find_subseq(seq, *last_lower);
if (!sseq_start)
return -EPIPE;
} else {
sseq_start = seq->sseqs;
}
for (sseq = sseq_start; sseq != &seq->sseqs[seq->first_free]; sseq++) {
err = __tipc_nl_add_nametable_publ(msg, seq, sseq, last_publ);
if (err) {
*last_lower = sseq->lower;
return err;
}
}
*last_lower = 0;
return 0;
}
int __tipc_nl_seq_list(struct tipc_nl_msg *msg, u32 *last_type, u32 *last_lower,
u32 *last_publ)
{
struct hlist_head *seq_head;
struct name_seq *seq;
int err;
int i;
if (*last_type)
i = hash(*last_type);
else
i = 0;
for (; i < TIPC_NAMETBL_SIZE; i++) {
seq_head = &table.types[i];
if (*last_type) {
seq = nametbl_find_seq(*last_type);
if (!seq)
return -EPIPE;
} else {
seq = hlist_entry_safe((seq_head)->first,
struct name_seq, ns_list);
if (!seq)
continue;
}
hlist_for_each_entry_from(seq, ns_list) {
spin_lock_bh(&seq->lock);
err = __tipc_nl_subseq_list(msg, seq, last_lower,
last_publ);
if (err) {
*last_type = seq->type;
spin_unlock_bh(&seq->lock);
return err;
}
spin_unlock_bh(&seq->lock);
}
*last_type = 0;
}
return 0;
}
int tipc_nl_name_table_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
int err;
int done = cb->args[3];
u32 last_type = cb->args[0];
u32 last_lower = cb->args[1];
u32 last_publ = cb->args[2];
struct tipc_nl_msg msg;
if (done)
return 0;
msg.skb = skb;
msg.portid = NETLINK_CB(cb->skb).portid;
msg.seq = cb->nlh->nlmsg_seq;
read_lock_bh(&tipc_nametbl_lock);
err = __tipc_nl_seq_list(&msg, &last_type, &last_lower, &last_publ);
if (!err) {
done = 1;
} else if (err != -EMSGSIZE) {
/* We never set seq or call nl_dump_check_consistent();
* this means that setting prev_seq here will cause the
* consistency check to fail in the netlink callback
* handler, resulting in the NLMSG_DONE message having
* the NLM_F_DUMP_INTR flag set if we got an error.
*/
cb->prev_seq = 1;
}
read_unlock_bh(&tipc_nametbl_lock);
cb->args[0] = last_type;
cb->args[1] = last_lower;
cb->args[2] = last_publ;
cb->args[3] = done;
return skb->len;
}

net/tipc/name_table.h

@ -1,7 +1,7 @@
/*
* net/tipc/name_table.h: Include file for TIPC name table code
*
* Copyright (c) 2000-2006, Ericsson AB
* Copyright (c) 2000-2006, 2014, Ericsson AB
* Copyright (c) 2004-2005, 2010-2011, Wind River Systems
* All rights reserved.
*
@ -84,6 +84,8 @@ struct publication {
extern rwlock_t tipc_nametbl_lock;
int tipc_nl_name_table_dump(struct sk_buff *skb, struct netlink_callback *cb);
struct sk_buff *tipc_nametbl_get(const void *req_tlv_area, int req_tlv_space);
u32 tipc_nametbl_translate(u32 type, u32 instance, u32 *node);
int tipc_nametbl_mc_translate(u32 type, u32 lower, u32 upper, u32 limit,

net/tipc/net.c

@ -42,6 +42,11 @@
#include "node.h"
#include "config.h"
static const struct nla_policy tipc_nl_net_policy[TIPC_NLA_NET_MAX + 1] = {
[TIPC_NLA_NET_UNSPEC] = { .type = NLA_UNSPEC },
[TIPC_NLA_NET_ID] = { .type = NLA_U32 }
};
/*
* The TIPC locking policy is designed to ensure a very fine locking
* granularity, permitting complete parallel access to individual
@ -138,3 +143,104 @@ void tipc_net_stop(void)
pr_info("Left network mode\n");
}
static int __tipc_nl_add_net(struct tipc_nl_msg *msg)
{
void *hdr;
struct nlattr *attrs;
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_v2_family,
NLM_F_MULTI, TIPC_NL_NET_GET);
if (!hdr)
return -EMSGSIZE;
attrs = nla_nest_start(msg->skb, TIPC_NLA_NET);
if (!attrs)
goto msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_NET_ID, tipc_net_id))
goto attr_msg_full;
nla_nest_end(msg->skb, attrs);
genlmsg_end(msg->skb, hdr);
return 0;
attr_msg_full:
nla_nest_cancel(msg->skb, attrs);
msg_full:
genlmsg_cancel(msg->skb, hdr);
return -EMSGSIZE;
}
int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
int err;
int done = cb->args[0];
struct tipc_nl_msg msg;
if (done)
return 0;
msg.skb = skb;
msg.portid = NETLINK_CB(cb->skb).portid;
msg.seq = cb->nlh->nlmsg_seq;
err = __tipc_nl_add_net(&msg);
if (err)
goto out;
done = 1;
out:
cb->args[0] = done;
return skb->len;
}
int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info)
{
int err;
struct nlattr *attrs[TIPC_NLA_NET_MAX + 1];
if (!info->attrs[TIPC_NLA_NET])
return -EINVAL;
err = nla_parse_nested(attrs, TIPC_NLA_NET_MAX,
info->attrs[TIPC_NLA_NET],
tipc_nl_net_policy);
if (err)
return err;
if (attrs[TIPC_NLA_NET_ID]) {
u32 val;
/* Can't change net id once TIPC has joined a network */
if (tipc_own_addr)
return -EPERM;
val = nla_get_u32(attrs[TIPC_NLA_NET_ID]);
if (val < 1 || val > 9999)
return -EINVAL;
tipc_net_id = val;
}
if (attrs[TIPC_NLA_NET_ADDR]) {
u32 addr;
/* Can't change net addr once TIPC has joined a network */
if (tipc_own_addr)
return -EPERM;
addr = nla_get_u32(attrs[TIPC_NLA_NET_ADDR]);
if (!tipc_addr_node_valid(addr))
return -EINVAL;
rtnl_lock();
tipc_net_start(addr);
rtnl_unlock();
}
return 0;
}

net/tipc/net.h

@ -1,7 +1,7 @@
/*
* net/tipc/net.h: Include file for TIPC network routing code
*
* Copyright (c) 1995-2006, Ericsson AB
* Copyright (c) 1995-2006, 2014, Ericsson AB
* Copyright (c) 2005, 2010-2011, Wind River Systems
* All rights reserved.
*
@ -37,7 +37,13 @@
#ifndef _TIPC_NET_H
#define _TIPC_NET_H
#include <net/genetlink.h>
int tipc_net_start(u32 addr);
void tipc_net_stop(void);
int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb);
int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info);
#endif

net/tipc/netlink.c

@ -1,7 +1,7 @@
/*
* net/tipc/netlink.c: TIPC configuration handling
*
* Copyright (c) 2005-2006, Ericsson AB
* Copyright (c) 2005-2006, 2014, Ericsson AB
* Copyright (c) 2005-2007, Wind River Systems
* All rights reserved.
*
@ -36,6 +36,12 @@
#include "core.h"
#include "config.h"
#include "socket.h"
#include "name_table.h"
#include "bearer.h"
#include "link.h"
#include "node.h"
#include "net.h"
#include <net/genetlink.h>
static int handle_cmd(struct sk_buff *skb, struct genl_info *info)
@ -68,6 +74,19 @@ static int handle_cmd(struct sk_buff *skb, struct genl_info *info)
return 0;
}
static const struct nla_policy tipc_nl_policy[TIPC_NLA_MAX + 1] = {
[TIPC_NLA_UNSPEC] = { .type = NLA_UNSPEC, },
[TIPC_NLA_BEARER] = { .type = NLA_NESTED, },
[TIPC_NLA_SOCK] = { .type = NLA_NESTED, },
[TIPC_NLA_PUBL] = { .type = NLA_NESTED, },
[TIPC_NLA_LINK] = { .type = NLA_NESTED, },
[TIPC_NLA_MEDIA] = { .type = NLA_NESTED, },
[TIPC_NLA_NODE] = { .type = NLA_NESTED, },
[TIPC_NLA_NET] = { .type = NLA_NESTED, },
[TIPC_NLA_NAME_TABLE] = { .type = NLA_NESTED, }
};
/* Legacy ASCII API */
static struct genl_family tipc_genl_family = {
.id = GENL_ID_GENERATE,
.name = TIPC_GENL_NAME,
@ -76,6 +95,7 @@ static struct genl_family tipc_genl_family = {
.maxattr = 0,
};
/* Legacy ASCII API */
static struct genl_ops tipc_genl_ops[] = {
{
.cmd = TIPC_GENL_CMD,
@ -83,11 +103,121 @@ static struct genl_ops tipc_genl_ops[] = {
},
};
/* Users of the legacy API (tipc-config) can't handle that we add operations,
* so we have a separate genl handling for the new API.
*/
struct genl_family tipc_genl_v2_family = {
.id = GENL_ID_GENERATE,
.name = TIPC_GENL_V2_NAME,
.version = TIPC_GENL_V2_VERSION,
.hdrsize = 0,
.maxattr = TIPC_NLA_MAX,
};
static const struct genl_ops tipc_genl_v2_ops[] = {
{
.cmd = TIPC_NL_BEARER_DISABLE,
.doit = tipc_nl_bearer_disable,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_BEARER_ENABLE,
.doit = tipc_nl_bearer_enable,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_BEARER_GET,
.doit = tipc_nl_bearer_get,
.dumpit = tipc_nl_bearer_dump,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_BEARER_SET,
.doit = tipc_nl_bearer_set,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_SOCK_GET,
.dumpit = tipc_nl_sk_dump,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_PUBL_GET,
.dumpit = tipc_nl_publ_dump,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_LINK_GET,
.doit = tipc_nl_link_get,
.dumpit = tipc_nl_link_dump,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_LINK_SET,
.doit = tipc_nl_link_set,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_LINK_RESET_STATS,
.doit = tipc_nl_link_reset_stats,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_MEDIA_GET,
.doit = tipc_nl_media_get,
.dumpit = tipc_nl_media_dump,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_MEDIA_SET,
.doit = tipc_nl_media_set,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_NODE_GET,
.dumpit = tipc_nl_node_dump,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_NET_GET,
.dumpit = tipc_nl_net_dump,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_NET_SET,
.doit = tipc_nl_net_set,
.policy = tipc_nl_policy,
},
{
.cmd = TIPC_NL_NAME_TABLE_GET,
.dumpit = tipc_nl_name_table_dump,
.policy = tipc_nl_policy,
}
};
int tipc_nlmsg_parse(const struct nlmsghdr *nlh, struct nlattr ***attr)
{
u32 maxattr = tipc_genl_v2_family.maxattr;
*attr = tipc_genl_v2_family.attrbuf;
if (!*attr)
return -EOPNOTSUPP;
return nlmsg_parse(nlh, GENL_HDRLEN, *attr, maxattr, tipc_nl_policy);
}
int tipc_netlink_start(void)
{
int res;
res = genl_register_family_with_ops(&tipc_genl_family, tipc_genl_ops);
if (res) {
pr_err("Failed to register legacy interface\n");
return res;
}
res = genl_register_family_with_ops(&tipc_genl_v2_family,
tipc_genl_v2_ops);
if (res) {
pr_err("Failed to register netlink interface\n");
return res;
@ -98,4 +228,5 @@ int tipc_netlink_start(void)
void tipc_netlink_stop(void)
{
genl_unregister_family(&tipc_genl_family);
genl_unregister_family(&tipc_genl_v2_family);
}

net/tipc/netlink.h (new file)

@ -0,0 +1,48 @@
/*
* net/tipc/netlink.h: Include file for TIPC netlink code
*
* Copyright (c) 2014, Ericsson AB
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the names of the copyright holders nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _TIPC_NETLINK_H
#define _TIPC_NETLINK_H
extern struct genl_family tipc_genl_v2_family;
int tipc_nlmsg_parse(const struct nlmsghdr *nlh, struct nlattr ***buf);
struct tipc_nl_msg {
struct sk_buff *skb;
u32 portid;
u32 seq;
};
#endif

net/tipc/node.c

@ -58,6 +58,12 @@ struct tipc_sock_conn {
struct list_head list;
};
static const struct nla_policy tipc_nl_node_policy[TIPC_NLA_NODE_MAX + 1] = {
[TIPC_NLA_NODE_UNSPEC] = { .type = NLA_UNSPEC },
[TIPC_NLA_NODE_ADDR] = { .type = NLA_U32 },
[TIPC_NLA_NODE_UP] = { .type = NLA_FLAG }
};
/*
* A trivial power-of-two bitmask technique is used for speed, since this
* operation is done for every incoming TIPC packet. The number of hash table
@ -601,3 +607,93 @@ void tipc_node_unlock(struct tipc_node *node)
tipc_nametbl_withdraw(TIPC_LINK_STATE, addr,
link_id, addr);
}
/* Caller should hold node lock for the passed node */
int __tipc_nl_add_node(struct tipc_nl_msg *msg, struct tipc_node *node)
{
void *hdr;
struct nlattr *attrs;
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_v2_family,
NLM_F_MULTI, TIPC_NL_NODE_GET);
if (!hdr)
return -EMSGSIZE;
attrs = nla_nest_start(msg->skb, TIPC_NLA_NODE);
if (!attrs)
goto msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_NODE_ADDR, node->addr))
goto attr_msg_full;
if (tipc_node_is_up(node))
if (nla_put_flag(msg->skb, TIPC_NLA_NODE_UP))
goto attr_msg_full;
nla_nest_end(msg->skb, attrs);
genlmsg_end(msg->skb, hdr);
return 0;
attr_msg_full:
nla_nest_cancel(msg->skb, attrs);
msg_full:
genlmsg_cancel(msg->skb, hdr);
return -EMSGSIZE;
}
int tipc_nl_node_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
int err;
int done = cb->args[0];
int last_addr = cb->args[1];
struct tipc_node *node;
struct tipc_nl_msg msg;
if (done)
return 0;
msg.skb = skb;
msg.portid = NETLINK_CB(cb->skb).portid;
msg.seq = cb->nlh->nlmsg_seq;
rcu_read_lock();
if (last_addr && !tipc_node_find(last_addr)) {
rcu_read_unlock();
/* We never set seq or call nl_dump_check_consistent();
* this means that setting prev_seq here will cause the
* consistency check to fail in the netlink callback
* handler, resulting in the NLMSG_DONE message having
* the NLM_F_DUMP_INTR flag set if the node state changed
* while we released the lock.
*/
cb->prev_seq = 1;
return -EPIPE;
}
list_for_each_entry_rcu(node, &tipc_node_list, list) {
if (last_addr) {
if (node->addr == last_addr)
last_addr = 0;
else
continue;
}
tipc_node_lock(node);
err = __tipc_nl_add_node(&msg, node);
if (err) {
last_addr = node->addr;
tipc_node_unlock(node);
goto out;
}
tipc_node_unlock(node);
}
done = 1;
out:
cb->args[0] = done;
cb->args[1] = last_addr;
rcu_read_unlock();
return skb->len;
}

net/tipc/node.h

@ -1,7 +1,7 @@
/*
* net/tipc/node.h: Include file for TIPC node management routines
*
* Copyright (c) 2000-2006, Ericsson AB
* Copyright (c) 2000-2006, 2014, Ericsson AB
* Copyright (c) 2005, 2010-2014, Wind River Systems
* All rights reserved.
*
@ -145,6 +145,8 @@ void tipc_node_unlock(struct tipc_node *node);
int tipc_node_add_conn(u32 dnode, u32 port, u32 peer_port);
void tipc_node_remove_conn(u32 dnode, u32 port);
int tipc_nl_node_dump(struct sk_buff *skb, struct netlink_callback *cb);
static inline void tipc_node_lock(struct tipc_node *node)
{
spin_lock_bh(&node->lock);

net/tipc/socket.c

@ -121,6 +121,14 @@ static const struct proto_ops msg_ops;
static struct proto tipc_proto;
static struct proto tipc_proto_kern;
static const struct nla_policy tipc_nl_sock_policy[TIPC_NLA_SOCK_MAX + 1] = {
[TIPC_NLA_SOCK_UNSPEC] = { .type = NLA_UNSPEC },
[TIPC_NLA_SOCK_ADDR] = { .type = NLA_U32 },
[TIPC_NLA_SOCK_REF] = { .type = NLA_U32 },
[TIPC_NLA_SOCK_CON] = { .type = NLA_NESTED },
[TIPC_NLA_SOCK_HAS_PUBL] = { .type = NLA_FLAG }
};
/*
* Revised TIPC socket locking policy:
*
@ -2801,3 +2809,231 @@ void tipc_socket_stop(void)
sock_unregister(tipc_family_ops.family);
proto_unregister(&tipc_proto);
}
/* Caller should hold socket lock for the passed tipc socket. */
int __tipc_nl_add_sk_con(struct sk_buff *skb, struct tipc_sock *tsk)
{
u32 peer_node;
u32 peer_port;
struct nlattr *nest;
peer_node = tsk_peer_node(tsk);
peer_port = tsk_peer_port(tsk);
nest = nla_nest_start(skb, TIPC_NLA_SOCK_CON);
if (nla_put_u32(skb, TIPC_NLA_CON_NODE, peer_node))
goto msg_full;
if (nla_put_u32(skb, TIPC_NLA_CON_SOCK, peer_port))
goto msg_full;
if (tsk->conn_type != 0) {
if (nla_put_flag(skb, TIPC_NLA_CON_FLAG))
goto msg_full;
if (nla_put_u32(skb, TIPC_NLA_CON_TYPE, tsk->conn_type))
goto msg_full;
if (nla_put_u32(skb, TIPC_NLA_CON_INST, tsk->conn_instance))
goto msg_full;
}
nla_nest_end(skb, nest);
return 0;
msg_full:
nla_nest_cancel(skb, nest);
return -EMSGSIZE;
}
/* Caller should hold socket lock for the passed tipc socket. */
int __tipc_nl_add_sk(struct sk_buff *skb, struct netlink_callback *cb,
struct tipc_sock *tsk)
{
int err;
void *hdr;
struct nlattr *attrs;
hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
&tipc_genl_v2_family, NLM_F_MULTI, TIPC_NL_SOCK_GET);
if (!hdr)
goto msg_cancel;
attrs = nla_nest_start(skb, TIPC_NLA_SOCK);
if (!attrs)
goto genlmsg_cancel;
if (nla_put_u32(skb, TIPC_NLA_SOCK_REF, tsk->ref))
goto attr_msg_cancel;
if (nla_put_u32(skb, TIPC_NLA_SOCK_ADDR, tipc_own_addr))
goto attr_msg_cancel;
if (tsk->connected) {
err = __tipc_nl_add_sk_con(skb, tsk);
if (err)
goto attr_msg_cancel;
} else if (!list_empty(&tsk->publications)) {
if (nla_put_flag(skb, TIPC_NLA_SOCK_HAS_PUBL))
goto attr_msg_cancel;
}
nla_nest_end(skb, attrs);
genlmsg_end(skb, hdr);
return 0;
attr_msg_cancel:
nla_nest_cancel(skb, attrs);
genlmsg_cancel:
genlmsg_cancel(skb, hdr);
msg_cancel:
return -EMSGSIZE;
}
int tipc_nl_sk_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
int err;
struct tipc_sock *tsk;
u32 prev_ref = cb->args[0];
u32 ref = prev_ref;
tsk = tipc_sk_get_next(&ref);
for (; tsk; tsk = tipc_sk_get_next(&ref)) {
lock_sock(&tsk->sk);
err = __tipc_nl_add_sk(skb, cb, tsk);
release_sock(&tsk->sk);
tipc_sk_put(tsk);
if (err)
break;
prev_ref = ref;
}
cb->args[0] = prev_ref;
return skb->len;
}
/* Caller should hold socket lock for the passed tipc socket. */
int __tipc_nl_add_sk_publ(struct sk_buff *skb, struct netlink_callback *cb,
struct publication *publ)
{
void *hdr;
struct nlattr *attrs;
hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
&tipc_genl_v2_family, NLM_F_MULTI, TIPC_NL_PUBL_GET);
if (!hdr)
goto msg_cancel;
attrs = nla_nest_start(skb, TIPC_NLA_PUBL);
if (!attrs)
goto genlmsg_cancel;
if (nla_put_u32(skb, TIPC_NLA_PUBL_KEY, publ->key))
goto attr_msg_cancel;
if (nla_put_u32(skb, TIPC_NLA_PUBL_TYPE, publ->type))
goto attr_msg_cancel;
if (nla_put_u32(skb, TIPC_NLA_PUBL_LOWER, publ->lower))
goto attr_msg_cancel;
if (nla_put_u32(skb, TIPC_NLA_PUBL_UPPER, publ->upper))
goto attr_msg_cancel;
nla_nest_end(skb, attrs);
genlmsg_end(skb, hdr);
return 0;
attr_msg_cancel:
nla_nest_cancel(skb, attrs);
genlmsg_cancel:
genlmsg_cancel(skb, hdr);
msg_cancel:
return -EMSGSIZE;
}
/* Caller should hold socket lock for the passed tipc socket. */
int __tipc_nl_list_sk_publ(struct sk_buff *skb, struct netlink_callback *cb,
struct tipc_sock *tsk, u32 *last_publ)
{
int err;
struct publication *p;
if (*last_publ) {
list_for_each_entry(p, &tsk->publications, pport_list) {
if (p->key == *last_publ)
break;
}
if (p->key != *last_publ) {
/* We never set seq or call nl_dump_check_consistent();
* this means that setting prev_seq here will cause the
* consistency check to fail in the netlink callback
* handler, resulting in the last NLMSG_DONE message
* having the NLM_F_DUMP_INTR flag set.
*/
cb->prev_seq = 1;
*last_publ = 0;
return -EPIPE;
}
} else {
p = list_first_entry(&tsk->publications, struct publication,
pport_list);
}
list_for_each_entry_from(p, &tsk->publications, pport_list) {
err = __tipc_nl_add_sk_publ(skb, cb, p);
if (err) {
*last_publ = p->key;
return err;
}
}
*last_publ = 0;
return 0;
}
int tipc_nl_publ_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
int err;
u32 tsk_ref = cb->args[0];
u32 last_publ = cb->args[1];
u32 done = cb->args[2];
struct tipc_sock *tsk;
if (!tsk_ref) {
struct nlattr **attrs;
struct nlattr *sock[TIPC_NLA_SOCK_MAX + 1];
err = tipc_nlmsg_parse(cb->nlh, &attrs);
if (err)
return err;
err = nla_parse_nested(sock, TIPC_NLA_SOCK_MAX,
attrs[TIPC_NLA_SOCK],
tipc_nl_sock_policy);
if (err)
return err;
if (!sock[TIPC_NLA_SOCK_REF])
return -EINVAL;
tsk_ref = nla_get_u32(sock[TIPC_NLA_SOCK_REF]);
}
if (done)
return 0;
tsk = tipc_sk_get(tsk_ref);
if (!tsk)
return -EINVAL;
lock_sock(&tsk->sk);
err = __tipc_nl_list_sk_publ(skb, cb, tsk, &last_publ);
if (!err)
done = 1;
release_sock(&tsk->sk);
tipc_sk_put(tsk);
cb->args[0] = tsk_ref;
cb->args[1] = last_publ;
cb->args[2] = done;
return skb->len;
}

net/tipc/socket.h

@ -36,6 +36,7 @@
#define _TIPC_SOCK_H
#include <net/sock.h>
#include <net/genetlink.h>
#define TIPC_CONNACK_INTV 256
#define TIPC_FLOWCTRL_WIN (TIPC_CONNACK_INTV * 2)
@ -47,5 +48,7 @@ void tipc_sk_mcast_rcv(struct sk_buff *buf);
void tipc_sk_reinit(void);
int tipc_sk_ref_table_init(u32 requested_size, u32 start);
void tipc_sk_ref_table_stop(void);
int tipc_nl_sk_dump(struct sk_buff *skb, struct netlink_callback *cb);
int tipc_nl_publ_dump(struct sk_buff *skb, struct netlink_callback *cb);
#endif