2006-01-03 01:04:38 +07:00
/*
* net/tipc/core.c: TIPC module code
*
tipc: remove 'links' list from tipc_bearer struct
In our ongoing effort to simplify the TIPC locking structure,
we see a need to remove the linked list for tipc_links
in the bearer. This can be explained as follows.
Currently, we have three different ways to access a link,
via three different lists/tables:
1: Via a node hash table:
Used by the time-critical outgoing/incoming data paths.
(e.g. link_send_sections_fast() and tipc_recv_msg() ):
grab net_lock(read)
find node from node hash table
grab node_lock
select link
grab bearer_lock
send_msg()
release bearer_lock
release node lock
release net_lock
2: Via a global linked list for nodes:
Used by configuration commands (link_cmd_set_value())
grab net_lock(read)
find node and link from global node list (using link name)
grab node_lock
update link
release node lock
release net_lock
(Same locking order as above. No problem.)
3: Via the bearer's linked link list:
Used by notifications from interface (e.g. tipc_disable_bearer() )
grab net_lock(write)
grab bearer_lock
get link ptr from bearer's link list
get node from link
grab node_lock
delete link
release node lock
release bearer_lock
release net_lock
(Different order from above, but works because we grab the
outer net_lock in write mode first, excluding all other access.)
The first major goal in our simplification effort is to get rid
of the "big" net_lock, replacing it with rcu-locks when accessing
the node list and node hash array. This will come in a later patch
series.
But to get there we first need to rewrite access methods #2 and #3,
since removal of net_lock would introduce three major problems:
a) In access method #2, we access the link before taking the
protecting node_lock. This will not work once net_lock is gone,
so we will have to change the access order. We will deal with
this in a later commit in this series, "tipc: add node lock
protection to link found by link_find_link()".
b) When the outer protection from net_lock is gone, taking
bearer_lock and node_lock in opposite order of method 1) and 2)
will become an obvious deadlock hazard. This is fixed in the
commit ("tipc: remove bearer_lock from tipc_bearer struct")
later in this series.
c) Similar to what is described in problem a), access method #3
starts with using a link pointer that is unprotected by node_lock,
in order to, via that pointer, find the correct node struct and
lock it. Before we remove net_lock, this access order must be
altered. This is what we do with this commit.
We can avoid introducing problem c) by using the global node list
even here to find the node, before accessing its links. When
we loop through the node list we use our own bearer identity as the
search criterion, thus easily finding the links that are associated with the
resetting/disabling bearer. It should be noted that although this
method is somewhat slower than the current list traversal, it is in
no way time critical. This is only about resetting or deleting links,
operations that must be considered relatively infrequent events.
As a bonus, we can get rid of the mutual pointers between links and
bearers. After this commit, the pointer dependency goes in one direction
only: from the link to the bearer.
This commit pre-empts introduction of problem c) as described above.
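As an illustration of the reworked access order, here is a minimal sketch of
walking the global node list and resetting the links bound to one bearer. All
types and names (ex_node, ex_link_reset, EX_MAX_BEARERS) are invented for the
example and do not match the kernel code exactly:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define EX_MAX_BEARERS 3

struct ex_link;                                 /* opaque in this sketch */

struct ex_node {
    struct list_head list;                      /* global node list linkage */
    spinlock_t lock;                            /* per-node lock */
    struct ex_link *links[EX_MAX_BEARERS];      /* one link slot per bearer */
};

void ex_link_reset(struct ex_link *l);          /* assumed to exist elsewhere */

/* Walk the global node list and reset every link bound to 'bearer_id',
 * always taking the node lock before touching the node's link slot.
 */
static void ex_bearer_reset_links(struct list_head *node_list, u32 bearer_id)
{
    struct ex_node *n;

    list_for_each_entry(n, node_list, list) {
        spin_lock_bh(&n->lock);
        if (n->links[bearer_id])
            ex_link_reset(n->links[bearer_id]);
        spin_unlock_bh(&n->lock);
    }
}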
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-02-14 05:29:09 +07:00
* Copyright (c) 2003-2006, 2013, Ericsson AB
tipc: convert topology server to use new server facility
As the new TIPC server infrastructure has been introduced, we can
now convert the TIPC topology server to it. We get two benefits
from doing this:
1) It simplifies the topology server locking policy. In the
original locking policy, we placed one spin lock pointer in the
tipc_subscriber structure to reuse the lock of the subscriber's
server port, controlling access to members of the tipc_subscriber
instance. That is, we used a single lock to ensure that both
tipc_port and tipc_subscriber members were safely accessed.
Now we introduce a separate spin lock in the tipc_subscriber
structure, protecting only its own members, to get a finer-grained locking
policy. Moreover, the change will allow us to make the topology
server code more readable and maintainable.
2) It fixes a bug where sent subscription events may be lost when
the topology port is congested. Using the new service, the
topology server now queues sent events into an outgoing buffer,
and then wakes up a sender process which has been blocked in
workqueue context. The process will keep picking events from the
buffer and send them to their respective subscribers, using the
kernel socket interface, until the buffer is empty. Even if the
socket is congested during transmission there is no risk that
events may be dropped, since the sender process may block when
needed.
Some minor reordering of initialization is done, since we now
have a scenario where the topology server must be started after
socket initialization has taken place, as the former depends
on the latter. Overall, making this changeover also simplifies the
TIPC subscriber code.
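For illustration, the queue-and-wakeup scheme described in 2) can be sketched
roughly as below. This is an outline in the same spirit, not the actual TIPC
server code; every name here is invented for the example, and the socket send
itself is left as a comment:

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct ex_event {
    struct list_head list;
    /* serialized subscription event payload would live here */
};

struct ex_conn {
    spinlock_t outq_lock;
    struct list_head outq;          /* queued, not yet sent events */
    struct work_struct swork;       /* sender work item */
};

/* Sender side, runs in workqueue (process) context, so it may block. */
static void ex_send_work(struct work_struct *work)
{
    struct ex_conn *con = container_of(work, struct ex_conn, swork);
    struct ex_event *evt;

    for (;;) {
        spin_lock_bh(&con->outq_lock);
        evt = list_first_entry_or_null(&con->outq, struct ex_event, list);
        if (evt)
            list_del(&evt->list);
        spin_unlock_bh(&con->outq_lock);
        if (!evt)
            break;
        /* kernel socket send of 'evt' would go here; blocking is fine */
        kfree(evt);
    }
}

/* Event producer: queue the event and wake the sender.
 * (INIT_LIST_HEAD/spin_lock_init/INIT_WORK are done at setup, not shown.)
 */
static void ex_queue_event(struct ex_conn *con, struct ex_event *evt)
{
    spin_lock_bh(&con->outq_lock);
    list_add_tail(&evt->list, &con->outq);
    spin_unlock_bh(&con->outq_lock);
    schedule_work(&con->swork);
}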
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-06-17 21:54:40 +07:00
* Copyright (c) 2005-2006, 2010-2013, Wind River Systems
2006-01-03 01:04:38 +07:00
* All rights reserved.
*
2006-01-11 19:30:43 +07:00
* Redistribution and use in source and binary forms, with or without
2006-01-03 01:04:38 +07:00
* modification, are permitted provided that the following conditions are met:
*
2006-01-11 19:30:43 +07:00
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the names of the copyright holders nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
2006-01-03 01:04:38 +07:00
*
2006-01-11 19:30:43 +07:00
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
2006-01-03 01:04:38 +07:00
* POSSIBILITY OF SUCH DAMAGE.
*/

#include "core.h"
#include "name_table.h"
#include "subscr.h"
2015-02-09 15:50:18 +07:00
#include "bearer.h"
#include "net.h"
2014-08-23 05:09:18 +07:00
#include "socket.h"
2015-10-22 19:51:35 +07:00
#include "bcast.h"
2019-03-19 18:49:49 +07:00
#include "node.h"
2019-11-08 12:05:11 +07:00
#include "crypto.h"
2006-01-03 01:04:38 +07:00
2012-06-29 11:16:37 +07:00
#include <linux/module.h>
2006-01-03 01:04:38 +07:00

/* configurable TIPC parameters */
netns: make struct pernet_operations::id unsigned int
Make struct pernet_operations::id unsigned.
There are 2 reasons to do so:
1)
This field is really an index into a zero-based array and
thus is an unsigned entity. Using a negative value is an out-of-bound
access by definition.
2)
On x86_64, unsigned 32-bit data which are mixed with pointers
via array indexing or offsets added to or subtracted from pointers
are preferred over signed 32-bit data.
"int" being used as an array index needs to be sign-extended
to 64-bit before being used.
    void f(long *p, int i)
    {
        g(p[i]);
    }

roughly translates to

    movsx rsi, esi
    mov   rdi, [rsi+...]
    call  g

MOVSX is a 3-byte instruction which isn't necessary if the variable is
unsigned, because x86_64 zero-extends by default.
Now, there is the net_generic() function which, you guessed it right, uses
"int" as an array index:
    static inline void *net_generic(const struct net *net, int id)
    {
        ...
        ptr = ng->ptr[id - 1];
        ...
    }

And this function is used a lot, so those sign extensions add up.
Patch snipes ~1730 bytes on allyesconfig kernel (without all junk
messing with code generation):
add/remove: 0/0 grow/shrink: 70/598 up/down: 396/-2126 (-1730)
Unfortunately some functions actually grow bigger.
This is a seemingly random artefact of code generation, with the register
allocator being used differently. gcc decides that some variable
needs to live in new r8+ registers and every access now requires a REX
prefix. Or it is shifted into r12, so the [r12+0] addressing mode has to be
used, which is longer than [r8].
However, overall balance is in negative direction:
add/remove: 0/0 grow/shrink: 70/598 up/down: 396/-2126 (-1730)
function old new delta
nfsd4_lock 3886 3959 +73
tipc_link_build_proto_msg 1096 1140 +44
mac80211_hwsim_new_radio 2776 2808 +32
tipc_mon_rcv 1032 1058 +26
svcauth_gss_legacy_init 1413 1429 +16
tipc_bcbase_select_primary 379 392 +13
nfsd4_exchange_id 1247 1260 +13
nfsd4_setclientid_confirm 782 793 +11
...
put_client_renew_locked 494 480 -14
ip_set_sockfn_get 730 716 -14
geneve_sock_add 829 813 -16
nfsd4_sequence_done 721 703 -18
nlmclnt_lookup_host 708 686 -22
nfsd4_lockt 1085 1063 -22
nfs_get_client 1077 1050 -27
tcf_bpf_init 1106 1076 -30
nfsd4_encode_fattr 5997 5930 -67
Total: Before=154856051, After=154854321, chg -0.00%
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-11-17 08:58:21 +07:00
unsigned int tipc_net_id __read_mostly;
2013-06-17 21:54:37 +07:00
int sysctl_tipc_rmem[3] __read_mostly; /* min/default/max */
2006-01-03 01:04:38 +07:00
2015-01-09 14:27:04 +07:00
static int __net_init tipc_init_net(struct net *net)
{
struct tipc_net *tn = net_generic(net, tipc_net_id);
2015-01-09 14:27:08 +07:00
int err;
2015-01-09 14:27:04 +07:00
tn->net_id = 4711;
2018-03-23 02:42:50 +07:00
tn->node_addr = 0;
tipc: handle collisions of 32-bit node address hash values
When a 32-bit node address is generated from a 128-bit identifier,
there is a risk of collisions which must be discovered and handled.
We do this as follows:
- We don't apply the generated address immediately to the node, but do
instead initiate a 1 sec trial period to allow other cluster members
to discover and handle such collisions.
- During the trial period the node periodically sends out a new type
of message, DSC_TRIAL_MSG, using broadcast or emulated broadcast,
to all the other nodes in the cluster.
- When a node receives such a message, it must check that the
presented 32-bit identifier is either unused, or was used by the very
same peer in a previous session. In both cases it accepts the request
by not responding to it (this decision logic is sketched after the list).
- If it finds that the same node has been up before using a different
address, it responds with a DSC_TRIAL_FAIL_MSG containing that
address.
- If it finds that the address has already been taken by some other
node, it generates a new, unused address and returns it to the
requester.
- During the trial period the requesting node must always be prepared
to accept a failure message, i.e., a message where a peer suggests a
different (or equal) address to the one tried. In those cases it
must apply the suggested value as trial address and restart the trial
period.
This algorithm ensures that in the vast majority of cases a node will
have the same address before and after a reboot. If a legacy user
configures the address explicitly, there will be no trial period and
messages, so this protocol addition is completely backwards compatible.
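For clarity, the receive-side decision logic above can be sketched in plain C
as below. The types and helpers are invented for the example and are not the
actual TIPC identifiers:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct peer_rec {
    uint32_t addr;      /* 32-bit node address currently in use */
    uint8_t id[16];     /* 128-bit node identifier */
};

/* Returns 0 if the trial address is accepted (no reply is sent); otherwise
 * writes a suggested address into *suggest and returns 1 (failure message).
 */
static int handle_trial(const struct peer_rec *known, int n_known,
                        uint32_t trial_addr, const uint8_t *peer_id,
                        uint32_t (*new_unused_addr)(void),
                        uint32_t *suggest)
{
    for (int i = 0; i < n_known; i++) {
        bool same_peer = !memcmp(known[i].id, peer_id, 16);

        if (known[i].addr == trial_addr && !same_peer) {
            /* address already taken by another node: propose a free one */
            *suggest = new_unused_addr();
            return 1;
        }
        if (same_peer && known[i].addr != trial_addr) {
            /* same peer was up before with a different address */
            *suggest = known[i].addr;
            return 1;
        }
    }
    return 0;   /* unused, or reused by the very same peer: accept */
}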
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-23 02:42:51 +07:00
tn->trial_addr = 0;
tn->addr_trial_end = 0;
2019-03-19 18:49:49 +07:00
tn->capabilities = TIPC_NODE_CAPABILITIES;
2020-09-07 13:17:25 +07:00
INIT_WORK(&tn->final_work.work, tipc_net_finalize_work);
2018-03-23 02:42:50 +07:00
memset(tn->node_id, 0, sizeof(tn->node_id));
memset(tn->node_id_string, 0, sizeof(tn->node_id_string));
tipc: add neighbor monitoring framework
TIPC based clusters are by default set up with full-mesh link
connectivity between all nodes. Those links are expected to provide
a short failure detection time, by default set to 1500 ms. Because
of this, the background load for neighbor monitoring in an N-node
cluster increases with a factor N on each node, while the overall
monitoring traffic through the network infrastructure increases at
a ~(N * (N - 1)) rate. Experience has shown that such clusters don't
scale well beyond ~100 nodes unless we significantly increase failure
discovery tolerance.
This commit introduces a framework and an algorithm that drastically
reduces this background load, while basically maintaining the original
failure detection times across the whole cluster. Using this algorithm,
background load will now grow at a rate of ~(2 * sqrt(N)) per node, and
at ~(2 * N * sqrt(N)) in traffic overhead. As an example, each node will
now have to actively monitor 38 neighbors in a 400-node cluster, instead
of as before 399.
This "Overlapping Ring Supervision Algorithm" is completely distributed
and employs no centralized or coordinated state. It goes as follows:
- Each node makes up a linearly ascending, circular list of all its N
known neighbors, based on their TIPC node identity. This algorithm
must be the same on all nodes.
- The node then selects the next M = sqrt(N) - 1 nodes downstream from
itself in the list, and chooses to actively monitor those. This is
called its "local monitoring domain".
- It creates a domain record describing the monitoring domain, and
piggy-backs this in the data area of all neighbor monitoring messages
(LINK_PROTOCOL/STATE) leaving that node. This means that all nodes in
the cluster eventually (default within 400 ms) will learn about
its monitoring domain.
- Whenever a node discovers a change in its local domain, e.g., a node
has been added or has gone down, it creates and sends out a new
version of its node record to inform all neighbors about the change.
- A node receiving a domain record from anybody outside its local domain
matches this against its own list (which may not look the same), and
chooses to not actively monitor those members of the received domain
record that are also present in its own list. Instead, it relies on
indications from the direct monitoring nodes if an indirectly
monitored node has gone up or down. If a node is indicated lost, the
receiving node temporarily activates its own direct monitoring towards
that node in order to confirm, or not, that it is actually gone.
- Since each node is actively monitoring sqrt(N) downstream neighbors,
each node is also actively monitored by the same number of upstream
neighbors. This means that all non-direct monitoring nodes normally
will receive sqrt(N) indications that a node is gone.
- A major drawback with ring monitoring is how it handles failures that
cause massive network partitionings. If both a lost node and all its
direct monitoring neighbors are inside the lost partition, the nodes in
the remaining partition will never receive indications about the loss.
To overcome this, each node also chooses to actively monitor some
nodes outside its local domain. Those nodes are called remote domain
"heads", and are selected in such a way that no node in the cluster
will be more than two direct monitoring hops away. Because of this,
each node, apart from monitoring the members of its local domain, will
also typically monitor sqrt(N) remote head nodes.
- As an optimization, local list status, domain status and domain
records are marked with a generation number. This saves senders from
unnecessarily conveying unaltered domain records, and receivers from
performing unneeded re-adaptations of their node monitoring list, such
as re-assigning domain heads.
- As a measure of caution we have added the possibility to disable the
new algorithm through configuration. We do this by keeping a threshold
value for the cluster size; a cluster that grows beyond this value
will switch from full-mesh to ring monitoring, and vice versa when
it shrinks below the value. This means that if the threshold is set to
a value larger than any anticipated cluster size (default size is 32)
the new algorithm is effectively disabled. A patch set for altering the
threshold value and for listing the table contents will follow shortly.
- This change is fully backwards compatible.
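For illustration only, the local-domain selection described in the list could
look roughly like the user-space sketch below; names and helpers are invented
and the kernel implementation differs in detail:

#include <math.h>
#include <stdint.h>
#include <stdlib.h>

static int cmp_u32(const void *a, const void *b)
{
    uint32_t x = *(const uint32_t *)a, y = *(const uint32_t *)b;

    return (x > y) - (x < y);
}

/* peers[] holds the N known node identities, including 'self'. Writes up to
 * max_dom members into dom[] and returns the resulting domain size.
 */
static int build_local_domain(uint32_t *peers, int n, uint32_t self,
                              uint32_t *dom, int max_dom)
{
    int m = (int)sqrt((double)n) - 1;   /* actively monitored neighbors */
    int i, pos = 0;

    qsort(peers, n, sizeof(*peers), cmp_u32);   /* ascending, circular list */
    while (pos < n && peers[pos] != self)
        pos++;
    if (m < 0)
        m = 0;
    if (m > max_dom)
        m = max_dom;
    for (i = 0; i < m; i++)
        dom[i] = peers[(pos + 1 + i) % n];      /* next m downstream nodes */
    return m;
}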
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-06-14 07:46:22 +07:00
tn->mon_threshold = TIPC_DEF_MON_THRESHOLD;
2015-01-09 14:27:12 +07:00
get_random_bytes(&tn->random, sizeof(int));
2015-01-09 14:27:05 +07:00
INIT_LIST_HEAD(&tn->node_list);
spin_lock_init(&tn->node_list_lock);
2015-01-09 14:27:04 +07:00
2019-11-08 12:05:11 +07:00
#ifdef CONFIG_TIPC_CRYPTO
err = tipc_crypto_start(&tn->crypto_tx, net, NULL);
if (err)
goto out_crypto;
#endif
2015-01-09 14:27:08 +07:00
err = tipc_sk_rht_init(net);
2015-01-09 14:27:09 +07:00
if (err)
goto out_sk_rht;
err = tipc_nametbl_init(net);
if (err)
goto out_nametbl;
2015-01-09 14:27:11 +07:00
2016-04-07 21:40:43 +07:00
INIT_LIST_HEAD(&tn->dist_queue);
2015-10-22 19:51:35 +07:00
err = tipc_bcast_init(net);
if (err)
goto out_bclink;
2019-08-07 09:52:29 +07:00
err = tipc_attach_loopback(net);
if (err)
goto out_bclink;
2015-01-09 14:27:09 +07:00
return 0;
2015-10-22 19:51:35 +07:00
out_bclink:
2015-01-09 14:27:11 +07:00
tipc_nametbl_stop(net);
2015-01-09 14:27:09 +07:00
out_nametbl:
tipc_sk_rht_destroy(net);
out_sk_rht:
2019-11-08 12:05:11 +07:00
#ifdef CONFIG_TIPC_CRYPTO
tipc_crypto_stop(&tn->crypto_tx);
out_crypto:
#endif
2015-01-09 14:27:08 +07:00
return err;
2015-01-09 14:27:04 +07:00
}

static void __net_exit tipc_exit_net(struct net *net)
{
2020-09-07 13:17:25 +07:00
struct tipc_net *tn = tipc_net(net);
2019-08-07 09:52:29 +07:00
tipc_detach_loopback(net);
2020-09-07 13:17:25 +07:00
/* Make sure the tipc_net_finalize_work() finished */
cancel_work_sync(&tn->final_work.work);
2015-01-09 14:27:05 +07:00
tipc_net_stop(net);
2020-08-27 09:56:51 +07:00
2015-10-22 19:51:35 +07:00
tipc_bcast_stop(net);
2015-01-09 14:27:09 +07:00
tipc_nametbl_stop(net);
2015-01-09 14:27:08 +07:00
tipc_sk_rht_destroy(net);
2019-11-08 12:05:11 +07:00
#ifdef CONFIG_TIPC_CRYPTO
tipc_crypto_stop(&tipc_net(net)->crypto_tx);
#endif
2015-01-09 14:27:04 +07:00
}
tipc: improve throughput between nodes in netns
Currently, TIPC transports intra-node user data messages directly
socket to socket, hence shortcutting all the lower layers of the
communication stack. This gives TIPC very good intra node performance,
both regarding throughput and latency.
We now introduce a similar mechanism for TIPC data traffic across
network namespaces located in the same kernel. On the send path, the
call chain is as always accompanied by the sending node's network name
space pointer. However, once we have reliably established that the
receiving node is represented by a namespace on the same host, we just
replace the namespace pointer with the receiving node/namespace's
ditto, and follow the regular socket receive path through the receiving
node. This technique gives us a throughput similar to the node internal
throughput, several times larger than if we let the traffic go through
the full network stacks. As a comparison, max throughput for 64k
messages is four times larger than TCP throughput for the same type of
traffic.
To meet any security concerns, the following should be noted.
- All nodes joining a cluster are supposed to have been certified
and authenticated by mechanisms outside TIPC. This is no different for
nodes/namespaces on the same host; they have to auto discover each
other using the attached interfaces, and establish links which are
supervised via the regular link monitoring mechanism. Hence, a kernel
local node has no other way to join a cluster than any other node, and
has to obey the policies set in the IP or device layers of the stack.
- Only when a sender has established with 100% certainty that the peer
node is located in a kernel local namespace does it choose to let user
data messages, and only those, take the crossover path to the receiving
node/namespace.
- If the receiving node/namespace is removed, its namespace pointer
is invalidated at all peer nodes, and their neighbor link monitoring
will eventually note that this node is gone.
- To ensure the "100% certainty" criterion, and prevent any possible
spoofing, received discovery messages must contain a proof that the
sender knows a common secret. We use the hash mix of the sending
node/namespace for this purpose, since it can be accessed directly by
all other namespaces in the kernel. Upon reception of a discovery
message, the receiver checks this proof against all the local
namespaces' hash_mix values. If it finds a match, then that, along with a
matching node id and cluster id, is deemed sufficient proof that
the peer node in question is in a local namespace, and a wormhole can
be opened (this check is sketched further below).
- We should also consider that TIPC is intended to be a cluster local
IPC mechanism (just like e.g. UNIX sockets) rather than a network
protocol, and hence we think it can be justified to allow it to shortcut the
lower protocol layers.
Regarding traceability, we should notice that since commit 6c9081a3915d
("tipc: add loopback device tracking") it is possible to follow the node
internal packet flow by just activating tcpdump on the loopback
interface. This will be true even for this mechanism; by activating
tcpdump on the involved nodes' loopback interfaces, their inter-namespace
messaging can easily be tracked.
v2:
- update 'net' pointer when node left/rejoined
v3:
- grab read/write lock when using node ref obj
v4:
- clone traffics between netns to loopback
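For clarity, the namespace check described above boils down to something like
the sketch below; the parameterization is invented for the example and none of
these names are real kernel APIs:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A discovery message carries the sender's hash mix as proof of the shared
 * secret. Only if it matches one of our own namespaces' hash_mix values
 * (together with the node id and cluster id checks done elsewhere) may the
 * socket-to-socket shortcut be enabled.
 */
static bool peer_is_local_namespace(const uint32_t *local_hash_mix, size_t n,
                                    uint32_t peer_hash_mix)
{
    for (size_t i = 0; i < n; i++)
        if (local_hash_mix[i] == peer_hash_mix)
            return true;
    return false;
}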
Suggested-by: Jon Maloy <jon.maloy@ericsson.com>
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-10-29 07:51:21 +07:00
static void __net_exit tipc_pernet_pre_exit(struct net *net)
{
tipc_node_pre_cleanup_net(net);
}

static struct pernet_operations tipc_pernet_pre_exit_ops = {
.pre_exit = tipc_pernet_pre_exit,
};

2015-01-09 14:27:04 +07:00
static struct pernet_operations tipc_net_ops = {
.init = tipc_init_net,
.exit = tipc_exit_net,
.id = &tipc_net_id,
.size = sizeof(struct tipc_net),
};

2019-05-20 13:43:59 +07:00
static struct pernet_operations tipc_topsrv_net_ops = {
.init = tipc_topsrv_init_net,
.exit = tipc_topsrv_exit_net,
};

2015-01-09 14:26:59 +07:00
static int __init tipc_init(void)
2006-01-03 01:04:38 +07:00
{
2014-02-20 10:32:49 +07:00
int err;
2006-01-03 01:04:38 +07:00
2015-01-09 14:26:59 +07:00
pr_info("Activated (version " TIPC_MOD_VER ")\n");
tipc: redesign connection-level flow control
There are two flow control mechanisms in TIPC; one at link level that
handles network congestion, burst control, and retransmission, and one
at connection level, whose only remaining task is to prevent overflow
in the receiving socket buffer. In TIPC, the latter task has to be
solved end-to-end because messages can not be thrown away once they
have been accepted and delivered upwards from the link layer, i.e, we
can never permit the receive buffer to overflow.
Currently, this algorithm is message based. A counter in the receiving
socket keeps track of the number of consumed messages, and sends a dedicated
acknowledge message back to the sender for every 256 consumed messages.
A counter at the sending end keeps track of the sent, not yet
acknowledged messages, and blocks the sender if this number ever reaches
512 unacknowledged messages. When the missing acknowledge arrives, the
socket is then woken up for renewed transmission. This works well for
keeping the message flow running, as it almost never happens that a
sender socket is blocked this way.
A problem with the current mechanism is that it potentially is very
memory consuming. Since we don't distinguish between small and large
messages, we have to dimension the socket receive buffer according
to a worst-case of both. I.e., the window size must be chosen large
enough to sustain a reasonable throughput even for the smallest
messages, while we must still consider a scenario where all messages
are of maximum size. Hence, the current fixed window size of 512 messages
and a maximum message size of 66k results in a receive buffer of 66 MB
when truesize(66k) = 131k is taken into account. It is possible to do
much better.
This commit introduces an algorithm where we instead use 1024-byte
blocks as base unit. This unit, always rounded upwards from the
actual message size, is used when we advertise windows as well as when
we count and acknowledge transmitted data. The advertised window is
based on the configured receive buffer size in such a way that even
the worst-case truesize/msgsize ratio always is covered. Since the
smallest possible message size (from a flow control viewpoint) now is
1024 bytes, we can safely assume this ratio to be less than four, which
is the value we are now using.
This way, we have been able to reduce the default receive buffer size
from 66 MB to 2 MB with maintained performance.
In order to keep this solution backwards compatible, we introduce a
new capability bit in the discovery protocol, and use this throughout
the message sending/reception path to always select the right unit.
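To make the block accounting concrete, here is a small sketch; the constant
and names are illustrative, not the exact TIPC identifiers or values:

#include <stdint.h>

#define EX_FLOWCTL_BLK_SZ 1024      /* advertisement unit, per the text */

/* Round a message length up to whole advertisement units (blocks). */
static inline uint32_t ex_msg_blocks(uint32_t msglen)
{
    return (msglen + EX_FLOWCTL_BLK_SZ - 1) / EX_FLOWCTL_BLK_SZ;
}

/* Receiver side: accumulate consumed blocks and decide when to send a
 * dedicated acknowledge message carrying the consumed count.
 */
static inline int ex_should_ack(uint32_t *rcv_unacked, uint32_t msglen,
                                uint32_t ack_threshold)
{
    *rcv_unacked += ex_msg_blocks(msglen);
    if (*rcv_unacked < ack_threshold)
        return 0;
    /* the acknowledge carrying *rcv_unacked would be sent here */
    *rcv_unacked = 0;
    return 1;
}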
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02 22:58:47 +07:00
sysctl_tipc_rmem[0] = RCVBUF_MIN;
sysctl_tipc_rmem[1] = RCVBUF_DEF;
sysctl_tipc_rmem[2] = RCVBUF_MAX;
2015-01-09 14:26:59 +07:00
2014-02-20 10:32:49 +07:00
err = tipc_register_sysctl();
if (err)
goto out_sysctl;
2019-06-20 17:39:28 +07:00
err = register_pernet_device(&tipc_net_ops);
2014-02-20 10:32:49 +07:00
if (err)
2015-01-09 14:27:11 +07:00
goto out_pernet;
2014-02-20 10:32:49 +07:00
2019-05-18 02:15:05 +07:00
err = tipc_socket_init();
if (err)
goto out_socket;
2019-06-20 17:39:28 +07:00
err = register_pernet_device(&tipc_topsrv_net_ops);
2019-05-20 13:43:59 +07:00
if (err)
goto out_pernet_topsrv;
tipc: improve throughput between nodes in netns
2019-10-29 07:51:21 +07:00
err = register_pernet_subsys(&tipc_pernet_pre_exit_ops);
if (err)
goto out_register_pernet_subsys;
2014-02-20 10:32:50 +07:00
err = tipc_bearer_setup();
if (err)
goto out_bearer;
2019-12-06 12:25:48 +07:00
err = tipc_netlink_start();
if (err)
goto out_netlink;
err = tipc_netlink_compat_start();
if (err)
goto out_netlink_compat;
2015-01-09 14:26:59 +07:00
pr_info("Started in single node mode\n");
2014-02-20 10:32:49 +07:00
return 0;
2019-12-06 12:25:48 +07:00
out_netlink_compat:
tipc_netlink_stop();
out_netlink:
tipc_bearer_cleanup();
2014-02-20 10:32:50 +07:00
out_bearer:
tipc: improve throughput between nodes in netns
2019-10-29 07:51:21 +07:00
unregister_pernet_subsys(&tipc_pernet_pre_exit_ops);
out_register_pernet_subsys:
2019-06-20 17:39:28 +07:00
unregister_pernet_device(&tipc_topsrv_net_ops);
2019-05-20 13:43:59 +07:00
out_pernet_topsrv:
2019-05-18 02:15:05 +07:00
tipc_socket_stop();
out_socket:
2019-06-20 17:39:28 +07:00
unregister_pernet_device(&tipc_net_ops);
2015-01-09 14:27:11 +07:00
out_pernet:
2014-02-20 10:32:49 +07:00
tipc_unregister_sysctl();
out_sysctl:
2015-01-09 14:26:59 +07:00
pr_err("Unable to start in single node mode\n");
2014-02-20 10:32:49 +07:00
return err;
2006-01-03 01:04:38 +07:00
}

static void __exit tipc_exit(void)
{
2019-12-06 12:25:48 +07:00
tipc_netlink_compat_stop();
tipc_netlink_stop();
2015-01-09 14:26:59 +07:00
tipc_bearer_cleanup();
tipc: improve throughput between nodes in netns
2019-10-29 07:51:21 +07:00
unregister_pernet_subsys(&tipc_pernet_pre_exit_ops);
2019-06-20 17:39:28 +07:00
unregister_pernet_device(&tipc_topsrv_net_ops);
2019-05-18 02:15:05 +07:00
tipc_socket_stop();
2019-06-20 17:39:28 +07:00
unregister_pernet_device(&tipc_net_ops);
2015-01-09 14:26:59 +07:00
tipc_unregister_sysctl();
2012-06-29 11:16:37 +07:00
pr_info("Deactivated\n");
2006-01-03 01:04:38 +07:00
}

module_init(tipc_init);
module_exit(tipc_exit);

MODULE_DESCRIPTION("TIPC: Transparent Inter Process Communication");
MODULE_LICENSE("Dual BSD/GPL");
2006-06-26 13:42:47 +07:00
MODULE_VERSION(TIPC_MOD_VER);