License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default, all files without license information fall under the
kernel's default license, which is GPL version 2.
Update the files which contain no license information with the
'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally
binding shorthand which can be used instead of the full boilerplate
text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
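For illustration (a hypothetical file, not one from this series), the
identifier is a single comment at the top of the file, standing in for
the dozen-plus lines of license boilerplate it replaces:

```c
// SPDX-License-Identifier: GPL-2.0
/*
 * example.c - the single SPDX line above carries the same legal
 * meaning as the traditional "This program is free software; you can
 * redistribute it and/or modify it..." boilerplate block.
 */
```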
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset
of the use cases:
 - the file had no licensing information in it,
 - the file was a */uapi/* one with no licensing information in it,
 - the file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to a
file was done in a spreadsheet containing side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing
SPDX tag:value files, created by Philippe Ombredanne. Philippe prepared
the base worksheet and did an initial spot review of a few thousand
files.
The 4.13 kernel was the starting point of the analysis, with 60,537
files assessed. Kate Stewart did a file-by-file comparison of the
scanner results in the spreadsheet to determine which SPDX license
identifier(s) should be applied to each file. She confirmed any
determination that was not immediately clear with lawyers working with
the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging
were:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained
   >5 lines of source.
 - The file already had some variant of a license header in it (even if
   <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
 - When neither scanner could find any license traces, the file was
   considered to have no license information in it, and the top-level
   COPYING file license applied.
   For non-*/uapi/* files that summary was:

     SPDX license identifier                            # files
     ---------------------------------------------------|-------
     GPL-2.0                                              11139

   and resulted in the first patch in this series.
   If the file was a */uapi/* path one, it was tagged "GPL-2.0 WITH
   Linux-syscall-note"; otherwise it was "GPL-2.0". The results were:

     SPDX license identifier                            # files
     ---------------------------------------------------|-------
     GPL-2.0 WITH Linux-syscall-note                        930

   and resulted in the second patch in this series.
 - If a file had some form of licensing information in it and was one
   of the */uapi/* ones, it was annotated with the Linux-syscall-note
   if any GPL-family license was found in the file, or if it had no
   licensing in it (per the prior point). Results summary:

     SPDX license identifier                            # files
     ---------------------------------------------------|------
     GPL-2.0 WITH Linux-syscall-note                        270
     GPL-2.0+ WITH Linux-syscall-note                       169
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)     21
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)     17
     LGPL-2.1+ WITH Linux-syscall-note                       15
     GPL-1.0+ WITH Linux-syscall-note                        14
     ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)     5
     LGPL-2.0+ WITH Linux-syscall-note                        4
     LGPL-2.1 WITH Linux-syscall-note                         3
     ((GPL-2.0 WITH Linux-syscall-note) OR MIT)               3
     ((GPL-2.0 WITH Linux-syscall-note) AND MIT)              1

   and that resulted in the third patch in this series.
 - When the two scanners agreed on the detected license(s), that became
   the concluded license(s).
 - When there was disagreement between the two scanners (one detected a
   license but the other didn't, or they detected different licenses),
   a manual inspection of the file occurred.
   - In most cases a manual inspection of the information in the file
     resulted in a clear resolution of the license that should apply
     (and which scanner probably needed to revisit its heuristics).
   - When it was not immediately clear, the license identifier was
     confirmed with lawyers working with the Linux Foundation.
   - If there was any question as to the appropriate license
     identifier, the file was flagged for further research and
     revisited later.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, in some cases with
confirmation by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights.
The Windriver scanner is based in part on an older version of
FOSSology, so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in
the files he inspected. For the non-uapi files Thomas did random spot
checks in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors; they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version, with:
 - a full ScanCode scan run, collecting the matched texts, detected
   license ids, and scores;
 - a review of everything where a license was detected (about 500+
   files) to ensure that the applied SPDX license was correct;
 - a review of everything where there was no detection but the patch
   license was not GPL-2.0 WITH Linux-syscall-note, to ensure that the
   applied SPDX license was correct.
This produced a worksheet with 20 files needing minor corrections. The
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 21:07:57 +07:00
|
|
|
// SPDX-License-Identifier: GPL-2.0
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* Central processing for nfsd.
|
|
|
|
*
|
|
|
|
* Authors: Olaf Kirch (okir@monad.swb.de)
|
|
|
|
*
|
|
|
|
* Copyright (C) 1995, 1996, 1997 Olaf Kirch <okir@monad.swb.de>
|
|
|
|
*/
|
|
|
|
|
2017-02-09 00:51:30 +07:00
|
|
|
#include <linux/sched/signal.h>
|
2007-07-17 18:03:35 +07:00
|
|
|
#include <linux/freezer.h>
|
2011-07-02 01:23:34 +07:00
|
|
|
#include <linux/module.h>
|
2005-04-17 05:20:36 +07:00
|
|
|
#include <linux/fs_struct.h>
|
2009-04-03 12:28:18 +07:00
|
|
|
#include <linux/swap.h>
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
#include <linux/sunrpc/stats.h>
|
|
|
|
#include <linux/sunrpc/svcsock.h>
|
2015-12-12 04:45:59 +07:00
|
|
|
#include <linux/sunrpc/svc_xprt.h>
|
2005-04-17 05:20:36 +07:00
|
|
|
#include <linux/lockd/bind.h>
|
2005-06-23 00:16:26 +07:00
|
|
|
#include <linux/nfsacl.h>
|
2009-08-15 22:54:41 +07:00
|
|
|
#include <linux/seq_file.h>
|
2015-12-12 04:45:59 +07:00
|
|
|
#include <linux/inetdevice.h>
|
|
|
|
#include <net/addrconf.h>
|
|
|
|
#include <net/ipv6.h>
|
2010-09-29 19:03:50 +07:00
|
|
|
#include <net/net_namespace.h>
|
2009-12-04 01:30:56 +07:00
|
|
|
#include "nfsd.h"
|
|
|
|
#include "cache.h"
|
2009-11-05 06:12:35 +07:00
|
|
|
#include "vfs.h"
|
2012-12-06 18:23:14 +07:00
|
|
|
#include "netns.h"
|
2019-08-19 01:18:48 +07:00
|
|
|
#include "filecache.h"
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
#define NFSDDBG_FACILITY NFSDDBG_SVC
|
|
|
|
|
|
|
|
extern struct svc_program nfsd_program;
|
2008-06-10 19:40:38 +07:00
|
|
|
static int nfsd(void *vrqstp);
|
2019-04-09 22:46:18 +07:00
|
|
|
#if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
|
|
|
|
static int nfsd_acl_rpcbind_set(struct net *,
|
|
|
|
const struct svc_program *,
|
|
|
|
u32, int,
|
|
|
|
unsigned short,
|
|
|
|
unsigned short);
|
2019-04-09 22:46:19 +07:00
|
|
|
static __be32 nfsd_acl_init_request(struct svc_rqst *,
|
|
|
|
const struct svc_program *,
|
|
|
|
struct svc_process_info *);
|
2019-04-09 22:46:18 +07:00
|
|
|
#endif
|
|
|
|
static int nfsd_rpcbind_set(struct net *,
|
|
|
|
const struct svc_program *,
|
|
|
|
u32, int,
|
|
|
|
unsigned short,
|
|
|
|
unsigned short);
|
2019-04-09 22:46:19 +07:00
|
|
|
static __be32 nfsd_init_request(struct svc_rqst *,
|
|
|
|
const struct svc_program *,
|
|
|
|
struct svc_process_info *);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2008-06-10 19:40:35 +07:00
|
|
|
/*
|
2012-12-06 18:23:24 +07:00
|
|
|
* nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and the members
|
2008-06-10 19:40:35 +07:00
|
|
|
* of the svc_serv struct. In particular, ->sv_nrthreads but also to some
|
|
|
|
* extent ->sv_temp_socks and ->sv_permsocks. It also protects nfsdstats.th_cnt
|
|
|
|
*
|
2012-12-06 18:23:24 +07:00
|
|
|
* If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a
|
2008-06-10 19:40:35 +07:00
|
|
|
* properly initialised 'struct svc_serv' with ->sv_nrthreads > 0. That number
|
|
|
|
* of nfsd threads must exist and each must listed in ->sp_all_threads in each
|
|
|
|
* entry of ->sv_pools[].
|
|
|
|
*
|
|
|
|
* Transitions of the thread count between zero and non-zero are of particular
|
|
|
|
* interest since the svc_serv needs to be created and initialized at that
|
|
|
|
* point, or freed.
|
2008-06-10 19:40:36 +07:00
|
|
|
*
|
|
|
|
* Finally, the nfsd_mutex also protects some of the global variables that are
|
|
|
|
* accessed when nfsd starts and that are settable via the write_* routines in
|
|
|
|
* nfsctl.c. In particular:
|
|
|
|
*
|
|
|
|
* user_recovery_dirname
|
|
|
|
* user_lease_time
|
|
|
|
* nfsd_versions
|
2008-06-10 19:40:35 +07:00
|
|
|
*/
|
|
|
|
DEFINE_MUTEX(nfsd_mutex);
|
|
|
|
|
2009-06-25 02:37:45 +07:00
|
|
|
/*
|
|
|
|
* nfsd_drc_lock protects nfsd_drc_max_pages and nfsd_drc_pages_used.
|
|
|
|
* nfsd_drc_max_pages limits the total amount of memory available for
|
|
|
|
* version 4.1 DRC caches.
|
|
|
|
* nfsd_drc_pages_used tracks the current version 4.1 DRC memory usage.
|
|
|
|
*/
|
|
|
|
spinlock_t nfsd_drc_lock;
|
2013-02-23 07:35:47 +07:00
|
|
|
unsigned long nfsd_drc_max_mem;
|
|
|
|
unsigned long nfsd_drc_mem_used;
|
2009-06-25 02:37:45 +07:00
|
|
|
|
2006-02-01 18:04:34 +07:00
|
|
|
#if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
|
|
|
|
static struct svc_stat nfsd_acl_svcstats;
|
2017-05-12 21:21:37 +07:00
|
|
|
static const struct svc_version *nfsd_acl_version[] = {
|
2006-02-01 18:04:34 +07:00
|
|
|
[2] = &nfsd_acl_version2,
|
|
|
|
[3] = &nfsd_acl_version3,
|
|
|
|
};
|
|
|
|
|
|
|
|
#define NFSD_ACL_MINVERS 2
|
2006-03-24 18:15:34 +07:00
|
|
|
#define NFSD_ACL_NRVERS ARRAY_SIZE(nfsd_acl_version)
|
2017-05-12 21:21:37 +07:00
|
|
|
static const struct svc_version *nfsd_acl_versions[NFSD_ACL_NRVERS];
|
2006-02-01 18:04:34 +07:00
|
|
|
|
|
|
|
static struct svc_program nfsd_acl_program = {
|
|
|
|
.pg_prog = NFS_ACL_PROGRAM,
|
|
|
|
.pg_nvers = NFSD_ACL_NRVERS,
|
|
|
|
.pg_vers = nfsd_acl_versions,
|
2007-01-26 15:56:58 +07:00
|
|
|
.pg_name = "nfsacl",
|
2006-02-01 18:04:34 +07:00
|
|
|
.pg_class = "nfsd",
|
|
|
|
.pg_stats = &nfsd_acl_svcstats,
|
|
|
|
.pg_authenticate = &svc_set_client,
|
2019-04-09 22:46:19 +07:00
|
|
|
.pg_init_request = nfsd_acl_init_request,
|
2019-04-09 22:46:18 +07:00
|
|
|
.pg_rpcbind_set = nfsd_acl_rpcbind_set,
|
2006-02-01 18:04:34 +07:00
|
|
|
};
|
|
|
|
|
|
|
|
static struct svc_stat nfsd_acl_svcstats = {
|
|
|
|
.program = &nfsd_acl_program,
|
|
|
|
};
|
|
|
|
#endif /* defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL) */
|
|
|
|
|
2017-05-12 21:21:37 +07:00
|
|
|
static const struct svc_version *nfsd_version[] = {
|
2005-11-07 16:00:25 +07:00
|
|
|
[2] = &nfsd_version2,
|
|
|
|
#if defined(CONFIG_NFSD_V3)
|
|
|
|
[3] = &nfsd_version3,
|
|
|
|
#endif
|
|
|
|
#if defined(CONFIG_NFSD_V4)
|
|
|
|
[4] = &nfsd_version4,
|
|
|
|
#endif
|
|
|
|
};
|
|
|
|
|
|
|
|
#define NFSD_MINVERS 2
|
2006-03-24 18:15:34 +07:00
|
|
|
#define NFSD_NRVERS ARRAY_SIZE(nfsd_version)
|
2005-11-07 16:00:25 +07:00
|
|
|
|
|
|
|
struct svc_program nfsd_program = {
|
2006-02-01 18:04:34 +07:00
|
|
|
#if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
|
|
|
|
.pg_next = &nfsd_acl_program,
|
|
|
|
#endif
|
2005-11-07 16:00:25 +07:00
|
|
|
.pg_prog = NFS_PROGRAM, /* program number */
|
|
|
|
.pg_nvers = NFSD_NRVERS, /* nr of entries in nfsd_version */
|
2019-04-09 22:46:19 +07:00
|
|
|
.pg_vers = nfsd_version, /* version table */
|
2005-11-07 16:00:25 +07:00
|
|
|
.pg_name = "nfsd", /* program name */
|
|
|
|
.pg_class = "nfsd", /* authentication class */
|
|
|
|
.pg_stats = &nfsd_svcstats, /* version table */
|
|
|
|
.pg_authenticate = &svc_set_client, /* export authentication */
|
2019-04-09 22:46:19 +07:00
|
|
|
.pg_init_request = nfsd_init_request,
|
2019-04-09 22:46:18 +07:00
|
|
|
.pg_rpcbind_set = nfsd_rpcbind_set,
|
2005-11-07 16:00:25 +07:00
|
|
|
};
|
|
|
|
|
2019-04-09 22:46:19 +07:00
|
|
|
static bool
|
|
|
|
nfsd_support_version(int vers)
|
|
|
|
{
|
|
|
|
if (vers >= NFSD_MINVERS && vers < NFSD_NRVERS)
|
|
|
|
return nfsd_version[vers] != NULL;
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
static bool *
|
|
|
|
nfsd_alloc_versions(void)
|
|
|
|
{
|
|
|
|
bool *vers = kmalloc_array(NFSD_NRVERS, sizeof(bool), GFP_KERNEL);
|
|
|
|
unsigned i;
|
|
|
|
|
|
|
|
if (vers) {
|
|
|
|
/* All compiled versions are enabled by default */
|
|
|
|
for (i = 0; i < NFSD_NRVERS; i++)
|
|
|
|
vers[i] = nfsd_support_version(i);
|
|
|
|
}
|
|
|
|
return vers;
|
|
|
|
}
|
|
|
|
|
|
|
|
static bool *
|
|
|
|
nfsd_alloc_minorversions(void)
|
|
|
|
{
|
|
|
|
bool *vers = kmalloc_array(NFSD_SUPPORTED_MINOR_VERSION + 1,
|
|
|
|
sizeof(bool), GFP_KERNEL);
|
|
|
|
unsigned i;
|
2009-04-03 12:28:59 +07:00
|
|
|
|
2019-04-09 22:46:19 +07:00
|
|
|
if (vers) {
|
|
|
|
/* All minor versions are enabled by default */
|
|
|
|
for (i = 0; i <= NFSD_SUPPORTED_MINOR_VERSION; i++)
|
|
|
|
vers[i] = nfsd_support_version(4);
|
|
|
|
}
|
|
|
|
return vers;
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
nfsd_netns_free_versions(struct nfsd_net *nn)
|
|
|
|
{
|
|
|
|
kfree(nn->nfsd_versions);
|
|
|
|
kfree(nn->nfsd4_minorversions);
|
|
|
|
nn->nfsd_versions = NULL;
|
|
|
|
nn->nfsd4_minorversions = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
nfsd_netns_init_versions(struct nfsd_net *nn)
|
|
|
|
{
|
|
|
|
if (!nn->nfsd_versions) {
|
|
|
|
nn->nfsd_versions = nfsd_alloc_versions();
|
|
|
|
nn->nfsd4_minorversions = nfsd_alloc_minorversions();
|
|
|
|
if (!nn->nfsd_versions || !nn->nfsd4_minorversions)
|
|
|
|
nfsd_netns_free_versions(nn);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
int nfsd_vers(struct nfsd_net *nn, int vers, enum vers_op change)
|
2006-10-02 16:17:46 +07:00
|
|
|
{
|
|
|
|
if (vers < NFSD_MINVERS || vers >= NFSD_NRVERS)
|
2010-05-14 18:33:36 +07:00
|
|
|
return 0;
|
2006-10-02 16:17:46 +07:00
|
|
|
switch(change) {
|
|
|
|
case NFSD_SET:
|
2019-04-09 22:46:19 +07:00
|
|
|
if (nn->nfsd_versions)
|
|
|
|
nn->nfsd_versions[vers] = nfsd_support_version(vers);
|
2007-01-26 15:56:58 +07:00
|
|
|
break;
|
2006-10-02 16:17:46 +07:00
|
|
|
case NFSD_CLEAR:
|
2019-04-09 22:46:19 +07:00
|
|
|
nfsd_netns_init_versions(nn);
|
|
|
|
if (nn->nfsd_versions)
|
|
|
|
nn->nfsd_versions[vers] = false;
|
2006-10-02 16:17:46 +07:00
|
|
|
break;
|
|
|
|
case NFSD_TEST:
|
2019-04-09 22:46:19 +07:00
|
|
|
if (nn->nfsd_versions)
|
|
|
|
return nn->nfsd_versions[vers];
|
|
|
|
/* Fallthrough */
|
2006-10-02 16:17:46 +07:00
|
|
|
case NFSD_AVAIL:
|
2019-04-09 22:46:19 +07:00
|
|
|
return nfsd_support_version(vers);
|
2006-10-02 16:17:46 +07:00
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
2009-04-03 12:28:59 +07:00
|
|
|
|
2017-02-23 06:35:32 +07:00
|
|
|
static void
|
2019-04-09 22:46:19 +07:00
|
|
|
nfsd_adjust_nfsd_versions4(struct nfsd_net *nn)
|
2017-02-23 06:35:32 +07:00
|
|
|
{
|
|
|
|
unsigned i;
|
|
|
|
|
|
|
|
for (i = 0; i <= NFSD_SUPPORTED_MINOR_VERSION; i++) {
|
2019-04-09 22:46:19 +07:00
|
|
|
if (nn->nfsd4_minorversions[i])
|
2017-02-23 06:35:32 +07:00
|
|
|
return;
|
|
|
|
}
|
2019-04-09 22:46:19 +07:00
|
|
|
nfsd_vers(nn, 4, NFSD_CLEAR);
|
2017-02-23 06:35:32 +07:00
|
|
|
}
|
|
|
|
|
2019-04-09 22:46:19 +07:00
|
|
|
int nfsd_minorversion(struct nfsd_net *nn, u32 minorversion, enum vers_op change)
|
2009-04-03 12:28:59 +07:00
|
|
|
{
|
2017-03-10 07:36:39 +07:00
|
|
|
if (minorversion > NFSD_SUPPORTED_MINOR_VERSION &&
|
|
|
|
change != NFSD_AVAIL)
|
2009-04-03 12:28:59 +07:00
|
|
|
return -1;
|
2019-04-09 22:46:19 +07:00
|
|
|
|
2009-04-03 12:28:59 +07:00
|
|
|
switch(change) {
|
|
|
|
case NFSD_SET:
|
2019-04-09 22:46:19 +07:00
|
|
|
if (nn->nfsd4_minorversions) {
|
|
|
|
nfsd_vers(nn, 4, NFSD_SET);
|
|
|
|
nn->nfsd4_minorversions[minorversion] =
|
|
|
|
nfsd_vers(nn, 4, NFSD_TEST);
|
|
|
|
}
|
2009-04-03 12:28:59 +07:00
|
|
|
break;
|
|
|
|
case NFSD_CLEAR:
|
2019-04-09 22:46:19 +07:00
|
|
|
nfsd_netns_init_versions(nn);
|
|
|
|
if (nn->nfsd4_minorversions) {
|
|
|
|
nn->nfsd4_minorversions[minorversion] = false;
|
|
|
|
nfsd_adjust_nfsd_versions4(nn);
|
|
|
|
}
|
2009-04-03 12:28:59 +07:00
|
|
|
break;
|
|
|
|
case NFSD_TEST:
|
2019-04-09 22:46:19 +07:00
|
|
|
if (nn->nfsd4_minorversions)
|
|
|
|
return nn->nfsd4_minorversions[minorversion];
|
|
|
|
return nfsd_vers(nn, 4, NFSD_TEST);
|
2009-04-03 12:28:59 +07:00
|
|
|
case NFSD_AVAIL:
|
2019-04-09 22:46:19 +07:00
|
|
|
return minorversion <= NFSD_SUPPORTED_MINOR_VERSION &&
|
|
|
|
nfsd_vers(nn, 4, NFSD_AVAIL);
|
2009-04-03 12:28:59 +07:00
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* Maximum number of nfsd processes
|
|
|
|
*/
|
|
|
|
#define NFSD_MAXSERVS 8192
|
|
|
|
|
2012-12-06 18:23:24 +07:00
|
|
|
int nfsd_nrthreads(struct net *net)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2008-06-12 10:38:42 +07:00
|
|
|
int rv = 0;
|
2012-12-06 18:23:24 +07:00
|
|
|
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
|
|
|
|
|
2008-06-12 10:38:42 +07:00
|
|
|
mutex_lock(&nfsd_mutex);
|
2012-12-06 18:23:24 +07:00
|
|
|
if (nn->nfsd_serv)
|
|
|
|
rv = nn->nfsd_serv->sv_nrthreads;
|
2008-06-12 10:38:42 +07:00
|
|
|
mutex_unlock(&nfsd_mutex);
|
|
|
|
return rv;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2019-04-09 23:13:37 +07:00
|
|
|
static int nfsd_init_socks(struct net *net, const struct cred *cred)
|
2010-07-22 05:29:25 +07:00
|
|
|
{
|
|
|
|
int error;
|
2012-12-06 18:23:24 +07:00
|
|
|
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
|
|
|
|
|
|
|
|
if (!list_empty(&nn->nfsd_serv->sv_permsocks))
|
2010-07-22 05:29:25 +07:00
|
|
|
return 0;
|
|
|
|
|
2012-12-06 18:23:24 +07:00
|
|
|
error = svc_create_xprt(nn->nfsd_serv, "udp", net, PF_INET, NFS_PORT,
|
2019-04-09 23:13:37 +07:00
|
|
|
SVC_SOCK_DEFAULTS, cred);
|
2010-07-22 05:29:25 +07:00
|
|
|
if (error < 0)
|
|
|
|
return error;
|
|
|
|
|
2012-12-06 18:23:24 +07:00
|
|
|
error = svc_create_xprt(nn->nfsd_serv, "tcp", net, PF_INET, NFS_PORT,
|
2019-04-09 23:13:37 +07:00
|
|
|
SVC_SOCK_DEFAULTS, cred);
|
2010-07-22 05:29:25 +07:00
|
|
|
if (error < 0)
|
|
|
|
return error;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-12-06 18:23:39 +07:00
|
|
|
static int nfsd_users = 0;
|
2010-07-20 03:50:04 +07:00
|
|
|
|
2012-12-06 18:23:29 +07:00
|
|
|
static int nfsd_startup_generic(int nrservs)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
2012-12-06 18:23:39 +07:00
|
|
|
if (nfsd_users++)
|
2012-12-06 18:23:29 +07:00
|
|
|
return 0;
|
|
|
|
|
2019-08-19 01:18:48 +07:00
|
|
|
ret = nfsd_file_cache_init();
|
|
|
|
if (ret)
|
|
|
|
goto dec_users;
|
2014-07-30 20:26:05 +07:00
|
|
|
|
2012-12-06 18:23:29 +07:00
|
|
|
ret = nfs4_state_start();
|
|
|
|
if (ret)
|
2019-08-19 01:18:56 +07:00
|
|
|
goto out_file_cache;
|
2012-12-06 18:23:29 +07:00
|
|
|
return 0;
|
|
|
|
|
2019-08-19 01:18:48 +07:00
|
|
|
out_file_cache:
|
|
|
|
nfsd_file_cache_shutdown();
|
2014-07-30 20:26:05 +07:00
|
|
|
dec_users:
|
|
|
|
nfsd_users--;
|
2012-12-06 18:23:29 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void nfsd_shutdown_generic(void)
|
|
|
|
{
|
2012-12-06 18:23:39 +07:00
|
|
|
if (--nfsd_users)
|
|
|
|
return;
|
|
|
|
|
2012-12-06 18:23:29 +07:00
|
|
|
nfs4_state_shutdown();
|
2019-08-19 01:18:48 +07:00
|
|
|
nfsd_file_cache_shutdown();
|
2012-12-06 18:23:29 +07:00
|
|
|
}
|
|
|
|
|
2019-04-09 22:46:19 +07:00
|
|
|
static bool nfsd_needs_lockd(struct nfsd_net *nn)
|
2013-12-31 12:17:30 +07:00
|
|
|
{
|
2019-04-09 22:46:19 +07:00
|
|
|
return nfsd_vers(nn, 2, NFSD_TEST) || nfsd_vers(nn, 3, NFSD_TEST);
|
2013-12-31 12:17:30 +07:00
|
|
|
}
|
|
|
|
|
2019-09-03 00:02:56 +07:00
|
|
|
void nfsd_copy_boot_verifier(__be32 verf[2], struct nfsd_net *nn)
|
|
|
|
{
|
|
|
|
int seq = 0;
|
|
|
|
|
|
|
|
do {
|
|
|
|
read_seqbegin_or_lock(&nn->boot_lock, &seq);
|
|
|
|
/*
|
|
|
|
* This is opaque to client, so no need to byte-swap. Use
|
|
|
|
* __force to keep sparse happy. y2038 time_t overflow is
|
|
|
|
* irrelevant in this usage
|
|
|
|
*/
|
|
|
|
verf[0] = (__force __be32)nn->nfssvc_boot.tv_sec;
|
|
|
|
verf[1] = (__force __be32)nn->nfssvc_boot.tv_nsec;
|
|
|
|
} while (need_seqretry(&nn->boot_lock, seq));
|
|
|
|
done_seqretry(&nn->boot_lock, seq);
|
|
|
|
}
|
|
|
|
|
2019-09-23 12:58:59 +07:00
|
|
|
static void nfsd_reset_boot_verifier_locked(struct nfsd_net *nn)
|
2019-09-03 00:02:56 +07:00
|
|
|
{
|
|
|
|
ktime_get_real_ts64(&nn->nfssvc_boot);
|
|
|
|
}
|
|
|
|
|
|
|
|
void nfsd_reset_boot_verifier(struct nfsd_net *nn)
|
|
|
|
{
|
|
|
|
write_seqlock(&nn->boot_lock);
|
|
|
|
nfsd_reset_boot_verifier_locked(nn);
|
|
|
|
write_sequnlock(&nn->boot_lock);
|
|
|
|
}
|
|
|
|
|
2019-04-09 23:13:37 +07:00
|
|
|
static int nfsd_startup_net(int nrservs, struct net *net, const struct cred *cred)
|
2012-12-06 18:23:09 +07:00
|
|
|
{
|
2012-12-06 18:23:14 +07:00
|
|
|
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
|
2012-12-06 18:23:09 +07:00
|
|
|
int ret;
|
|
|
|
|
2012-12-06 18:23:14 +07:00
|
|
|
if (nn->nfsd_net_up)
|
|
|
|
return 0;
|
|
|
|
|
2012-12-06 18:23:34 +07:00
|
|
|
ret = nfsd_startup_generic(nrservs);
|
2012-12-06 18:23:09 +07:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
2019-04-09 23:13:37 +07:00
|
|
|
ret = nfsd_init_socks(net, cred);
|
2012-12-06 18:23:34 +07:00
|
|
|
if (ret)
|
|
|
|
goto out_socks;
|
2013-12-31 12:17:30 +07:00
|
|
|
|
2019-04-09 22:46:19 +07:00
|
|
|
if (nfsd_needs_lockd(nn) && !nn->lockd_up) {
|
2019-04-09 23:13:39 +07:00
|
|
|
ret = lockd_up(net, cred);
|
2013-12-31 12:17:30 +07:00
|
|
|
if (ret)
|
|
|
|
goto out_socks;
|
|
|
|
nn->lockd_up = 1;
|
|
|
|
}
|
|
|
|
|
2012-12-06 18:23:09 +07:00
|
|
|
ret = nfs4_state_start_net(net);
|
|
|
|
if (ret)
|
|
|
|
goto out_lockd;
|
|
|
|
|
2012-12-06 18:23:14 +07:00
|
|
|
nn->nfsd_net_up = true;
|
2012-12-06 18:23:09 +07:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
out_lockd:
|
2013-12-31 12:17:30 +07:00
|
|
|
if (nn->lockd_up) {
|
|
|
|
lockd_down(net);
|
|
|
|
nn->lockd_up = 0;
|
|
|
|
}
|
2012-12-06 18:23:34 +07:00
|
|
|
out_socks:
|
2012-12-06 18:23:29 +07:00
|
|
|
nfsd_shutdown_generic();
|
2010-07-20 03:50:04 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2012-12-06 18:23:09 +07:00
|
|
|
static void nfsd_shutdown_net(struct net *net)
|
|
|
|
{
|
2012-12-06 18:23:14 +07:00
|
|
|
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
|
|
|
|
|
2019-09-03 00:02:55 +07:00
|
|
|
nfsd_file_cache_purge(net);
|
2012-12-06 18:23:09 +07:00
|
|
|
nfs4_state_shutdown_net(net);
|
2013-12-31 12:17:30 +07:00
|
|
|
if (nn->lockd_up) {
|
|
|
|
lockd_down(net);
|
|
|
|
nn->lockd_up = 0;
|
|
|
|
}
|
2012-12-06 18:23:14 +07:00
|
|
|
nn->nfsd_net_up = false;
|
2012-12-06 18:23:34 +07:00
|
|
|
nfsd_shutdown_generic();
|
2012-12-06 18:23:09 +07:00
|
|
|
}
|
|
|
|
|
2015-12-12 04:45:59 +07:00
|
|
|
static int nfsd_inetaddr_event(struct notifier_block *this, unsigned long event,
|
|
|
|
void *ptr)
|
|
|
|
{
|
|
|
|
struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
|
|
|
|
struct net_device *dev = ifa->ifa_dev->dev;
|
|
|
|
struct net *net = dev_net(dev);
|
|
|
|
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
|
|
|
|
struct sockaddr_in sin;
|
|
|
|
|
2017-11-10 14:19:35 +07:00
|
|
|
if ((event != NETDEV_DOWN) ||
|
|
|
|
!atomic_inc_not_zero(&nn->ntf_refcnt))
|
2015-12-12 04:45:59 +07:00
|
|
|
goto out;
|
|
|
|
|
|
|
|
if (nn->nfsd_serv) {
|
|
|
|
dprintk("nfsd_inetaddr_event: removed %pI4\n", &ifa->ifa_local);
|
|
|
|
sin.sin_family = AF_INET;
|
|
|
|
sin.sin_addr.s_addr = ifa->ifa_local;
|
|
|
|
svc_age_temp_xprts_now(nn->nfsd_serv, (struct sockaddr *)&sin);
|
|
|
|
}
|
2017-11-10 14:19:35 +07:00
|
|
|
atomic_dec(&nn->ntf_refcnt);
|
|
|
|
wake_up(&nn->ntf_wq);
|
2015-12-12 04:45:59 +07:00
|
|
|
|
|
|
|
out:
|
|
|
|
return NOTIFY_DONE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct notifier_block nfsd_inetaddr_notifier = {
|
|
|
|
.notifier_call = nfsd_inetaddr_event,
|
|
|
|
};
|
|
|
|
|
|
|
|
#if IS_ENABLED(CONFIG_IPV6)
|
|
|
|
static int nfsd_inet6addr_event(struct notifier_block *this,
|
|
|
|
unsigned long event, void *ptr)
|
|
|
|
{
|
|
|
|
struct inet6_ifaddr *ifa = (struct inet6_ifaddr *)ptr;
|
|
|
|
struct net_device *dev = ifa->idev->dev;
|
|
|
|
struct net *net = dev_net(dev);
|
|
|
|
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
|
|
|
|
struct sockaddr_in6 sin6;
|
|
|
|
|
2017-11-10 14:19:35 +07:00
|
|
|
if ((event != NETDEV_DOWN) ||
|
|
|
|
!atomic_inc_not_zero(&nn->ntf_refcnt))
|
2015-12-12 04:45:59 +07:00
|
|
|
goto out;
|
|
|
|
|
|
|
|
if (nn->nfsd_serv) {
|
|
|
|
dprintk("nfsd_inet6addr_event: removed %pI6\n", &ifa->addr);
|
|
|
|
sin6.sin6_family = AF_INET6;
|
|
|
|
sin6.sin6_addr = ifa->addr;
|
2017-01-06 04:34:49 +07:00
|
|
|
if (ipv6_addr_type(&sin6.sin6_addr) & IPV6_ADDR_LINKLOCAL)
|
|
|
|
sin6.sin6_scope_id = ifa->idev->dev->ifindex;
|
2015-12-12 04:45:59 +07:00
|
|
|
svc_age_temp_xprts_now(nn->nfsd_serv, (struct sockaddr *)&sin6);
|
|
|
|
}
|
2017-11-10 14:19:35 +07:00
|
|
|
atomic_dec(&nn->ntf_refcnt);
|
|
|
|
wake_up(&nn->ntf_wq);
|
2015-12-12 04:45:59 +07:00
|
|
|
out:
|
|
|
|
return NOTIFY_DONE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct notifier_block nfsd_inet6addr_notifier = {
|
|
|
|
.notifier_call = nfsd_inet6addr_event,
|
|
|
|
};
|
|
|
|
#endif
|
|
|
|
|
2016-09-21 19:33:05 +07:00
|
|
|
/* Only used under nfsd_mutex, so this atomic may be overkill: */
|
|
|
|
static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
|
|
|
|
|
2012-12-06 18:23:44 +07:00
|
|
|
static void nfsd_last_thread(struct svc_serv *serv, struct net *net)
|
2010-07-20 03:50:04 +07:00
|
|
|
{
|
2012-12-06 18:23:34 +07:00
|
|
|
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
|
|
|
|
|
2017-11-10 14:19:35 +07:00
|
|
|
atomic_dec(&nn->ntf_refcnt);
|
2016-09-21 19:33:05 +07:00
|
|
|
/* check if the notifier still has clients */
|
|
|
|
if (atomic_dec_return(&nfsd_notifier_refcount) == 0) {
|
|
|
|
unregister_inetaddr_notifier(&nfsd_inetaddr_notifier);
|
2015-12-12 04:45:59 +07:00
|
|
|
#if IS_ENABLED(CONFIG_IPV6)
|
2016-09-21 19:33:05 +07:00
|
|
|
unregister_inet6addr_notifier(&nfsd_inet6addr_notifier);
|
2015-12-12 04:45:59 +07:00
|
|
|
#endif
|
2016-09-21 19:33:05 +07:00
|
|
|
}
|
2017-11-10 14:19:35 +07:00
|
|
|
wait_event(nn->ntf_wq, atomic_read(&nn->ntf_refcnt) == 0);
|
2016-09-21 19:33:05 +07:00
|
|
|
|
2010-07-20 03:50:04 +07:00
|
|
|
/*
|
|
|
|
* write_ports can create the server without actually starting
|
|
|
|
* any threads--if we get shut down before any threads are
|
|
|
|
* started, then nfsd_last_thread will be run before any of this
|
2016-01-04 10:15:21 +07:00
|
|
|
* other initialization has been done except the rpcb information.
|
2010-07-20 03:50:04 +07:00
|
|
|
*/
|
2016-01-04 10:15:21 +07:00
|
|
|
svc_rpcb_cleanup(serv, net);
|
2012-12-06 18:23:34 +07:00
|
|
|
if (!nn->nfsd_net_up)
|
2010-07-20 03:50:04 +07:00
|
|
|
return;
|
2011-10-25 18:17:28 +07:00
|
|
|
|
2016-01-04 10:15:21 +07:00
|
|
|
nfsd_shutdown_net(net);
|
2008-06-10 19:40:37 +07:00
|
|
|
printk(KERN_WARNING "nfsd: last server has exited, flushing export "
|
|
|
|
"cache\n");
|
2012-04-11 18:13:21 +07:00
|
|
|
nfsd_export_flush(net);
|
2006-10-02 16:17:44 +07:00
|
|
|
}
|
2006-10-02 16:17:46 +07:00
|
|
|
|
2019-04-09 22:46:19 +07:00
|
|
|
void nfsd_reset_versions(struct nfsd_net *nn)
|
2006-10-02 16:17:46 +07:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
2017-03-10 07:36:39 +07:00
|
|
|
for (i = 0; i < NFSD_NRVERS; i++)
|
2019-04-09 22:46:19 +07:00
|
|
|
if (nfsd_vers(nn, i, NFSD_TEST))
|
2017-03-10 07:36:39 +07:00
|
|
|
return;
|
2006-10-02 16:17:46 +07:00
|
|
|
|
2017-03-10 07:36:39 +07:00
|
|
|
for (i = 0; i < NFSD_NRVERS; i++)
|
|
|
|
if (i != 4)
|
2019-04-09 22:46:19 +07:00
|
|
|
nfsd_vers(nn, i, NFSD_SET);
|
2017-03-10 07:36:39 +07:00
|
|
|
else {
|
|
|
|
int minor = 0;
|
2019-04-09 22:46:19 +07:00
|
|
|
while (nfsd_minorversion(nn, minor, NFSD_SET) >= 0)
|
2017-03-10 07:36:39 +07:00
|
|
|
minor++;
|
|
|
|
}
|
2006-10-02 16:17:46 +07:00
|
|
|
}
|
|
|
|
|
2009-04-03 12:28:18 +07:00
|
|
|
/*
|
|
|
|
* Each session guarantees a negotiated per slot memory cache for replies
|
|
|
|
* which in turn consumes memory beyond the v2/v3/v4.0 server. A dedicated
|
|
|
|
* NFSv4.1 server might want to use more memory for a DRC than a machine
|
|
|
|
* with mutiple services.
|
|
|
|
*
|
|
|
|
* Impose a hard limit on the number of pages for the DRC which varies
|
|
|
|
* according to the machines free pages. This is of course only a default.
|
|
|
|
*
|
|
|
|
* For now this is a #defined shift which could be under admin control
|
|
|
|
* in the future.
|
|
|
|
*/
|
|
|
|
static void set_max_drc(void)
|
|
|
|
{
|
2017-09-20 07:51:31 +07:00
|
|
|
#define NFSD_DRC_SIZE_SHIFT 7
|
2009-07-28 06:09:19 +07:00
|
|
|
nfsd_drc_max_mem = (nr_free_buffer_pages()
|
|
|
|
>> NFSD_DRC_SIZE_SHIFT) * PAGE_SIZE;
|
|
|
|
nfsd_drc_mem_used = 0;
|
2009-06-25 02:37:45 +07:00
|
|
|
spin_lock_init(&nfsd_drc_lock);
|
2013-02-23 07:35:47 +07:00
|
|
|
dprintk("%s nfsd_drc_max_mem %lu \n", __func__, nfsd_drc_max_mem);
|
2009-04-03 12:28:18 +07:00
|
|
|
}
|
2008-06-10 19:40:35 +07:00
|
|
|
|
2012-01-31 04:18:35 +07:00
|
|
|
static int nfsd_get_default_max_blksize(void)
|
2006-10-02 16:17:46 +07:00
|
|
|
{
|
2012-01-31 04:18:35 +07:00
|
|
|
struct sysinfo i;
|
|
|
|
unsigned long long target;
|
|
|
|
unsigned long ret;
|
2008-06-10 19:40:35 +07:00
|
|
|
|
2012-01-31 04:18:35 +07:00
|
|
|
si_meminfo(&i);
|
2012-01-31 04:21:11 +07:00
|
|
|
target = (i.totalram - i.totalhigh) << PAGE_SHIFT;
|
2012-01-31 04:18:35 +07:00
|
|
|
/*
|
|
|
|
* Aim for 1/4096 of memory per thread This gives 1MB on 4Gig
|
|
|
|
* machines, but only uses 32K on 128M machines. Bottom out at
|
|
|
|
* 8K on 32M and smaller. Of course, this is only a default.
|
|
|
|
*/
|
|
|
|
target >>= 12;
|
|
|
|
|
|
|
|
ret = NFSSVC_MAXBLKSIZE;
|
|
|
|
while (ret > target && ret >= 8*1024*2)
|
|
|
|
ret /= 2;
|
|
|
|
return ret;
|
|
|
|
}

static const struct svc_serv_ops nfsd_thread_sv_ops = {
	.svo_shutdown		= nfsd_last_thread,
	.svo_function		= nfsd,
	.svo_enqueue_xprt	= svc_xprt_do_enqueue,
	.svo_setup		= svc_set_num_threads,
	.svo_module		= THIS_MODULE,
};

int nfsd_create_serv(struct net *net)
{
	int error;
	struct nfsd_net *nn = net_generic(net, nfsd_net_id);

	WARN_ON(!mutex_is_locked(&nfsd_mutex));
	if (nn->nfsd_serv) {
		svc_get(nn->nfsd_serv);
		return 0;
	}
	if (nfsd_max_blksize == 0)
		nfsd_max_blksize = nfsd_get_default_max_blksize();
	nfsd_reset_versions(nn);
	nn->nfsd_serv = svc_create_pooled(&nfsd_program, nfsd_max_blksize,
						&nfsd_thread_sv_ops);
	if (nn->nfsd_serv == NULL)
		return -ENOMEM;

	nn->nfsd_serv->sv_maxconn = nn->max_connections;
	error = svc_bind(nn->nfsd_serv, net);
	if (error < 0) {
		svc_destroy(nn->nfsd_serv);
		return error;
	}

	set_max_drc();
	/* check if the notifier is already set */
	if (atomic_inc_return(&nfsd_notifier_refcount) == 1) {
		register_inetaddr_notifier(&nfsd_inetaddr_notifier);
#if IS_ENABLED(CONFIG_IPV6)
		register_inet6addr_notifier(&nfsd_inet6addr_notifier);
#endif
	}
	atomic_inc(&nn->ntf_refcnt);
	nfsd_reset_boot_verifier(nn);
	return 0;
}

int nfsd_nrpools(struct net *net)
{
	struct nfsd_net *nn = net_generic(net, nfsd_net_id);

	if (nn->nfsd_serv == NULL)
		return 0;
	else
		return nn->nfsd_serv->sv_nrpools;
}

int nfsd_get_nrthreads(int n, int *nthreads, struct net *net)
{
	int i = 0;
	struct nfsd_net *nn = net_generic(net, nfsd_net_id);

	if (nn->nfsd_serv != NULL) {
		for (i = 0; i < nn->nfsd_serv->sv_nrpools && i < n; i++)
			nthreads[i] = nn->nfsd_serv->sv_pools[i].sp_nrthreads;
	}

	return 0;
}

void nfsd_destroy(struct net *net)
{
	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
	int destroy = (nn->nfsd_serv->sv_nrthreads == 1);

	if (destroy)
		svc_shutdown_net(nn->nfsd_serv, net);
	svc_destroy(nn->nfsd_serv);
	if (destroy)
		nn->nfsd_serv = NULL;
}

int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
{
	int i = 0;
	int tot = 0;
	int err = 0;
	struct nfsd_net *nn = net_generic(net, nfsd_net_id);

	WARN_ON(!mutex_is_locked(&nfsd_mutex));

	if (nn->nfsd_serv == NULL || n <= 0)
		return 0;

	if (n > nn->nfsd_serv->sv_nrpools)
		n = nn->nfsd_serv->sv_nrpools;

	/* enforce a global maximum number of threads */
	tot = 0;
	for (i = 0; i < n; i++) {
		nthreads[i] = min(nthreads[i], NFSD_MAXSERVS);
		tot += nthreads[i];
	}
	if (tot > NFSD_MAXSERVS) {
		/* total too large: scale down requested numbers */
		for (i = 0; i < n && tot > 0; i++) {
			int new = nthreads[i] * NFSD_MAXSERVS / tot;
			tot -= (nthreads[i] - new);
			nthreads[i] = new;
		}
		for (i = 0; i < n && tot > 0; i++) {
			nthreads[i]--;
			tot--;
		}
	}

	/*
	 * There must always be a thread in pool 0; the admin
	 * can't shut down NFS completely using pool_threads.
	 */
	if (nthreads[0] == 0)
		nthreads[0] = 1;

	/* apply the new numbers */
	svc_get(nn->nfsd_serv);
	for (i = 0; i < n; i++) {
		err = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv,
				&nn->nfsd_serv->sv_pools[i], nthreads[i]);
		if (err)
			break;
	}
	nfsd_destroy(net);
	return err;
}

/*
 * Adjust the number of threads and return the new number of threads.
 * This is also the function that starts the server if necessary, if
 * this is the first time nrservs is nonzero.
 */
int
nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
{
	int error;
	bool nfsd_up_before;
	struct nfsd_net *nn = net_generic(net, nfsd_net_id);

	mutex_lock(&nfsd_mutex);
	dprintk("nfsd: creating service\n");

	nrservs = max(nrservs, 0);
	nrservs = min(nrservs, NFSD_MAXSERVS);
	error = 0;

	if (nrservs == 0 && nn->nfsd_serv == NULL)
		goto out;

	error = nfsd_create_serv(net);
	if (error)
		goto out;

	nfsd_up_before = nn->nfsd_net_up;

	error = nfsd_startup_net(nrservs, net, cred);
	if (error)
		goto out_destroy;
	error = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv,
			NULL, nrservs);
	if (error)
		goto out_shutdown;
	/* We are holding a reference to nn->nfsd_serv which
	 * we don't want to count in the return value,
	 * so subtract 1
	 */
	error = nn->nfsd_serv->sv_nrthreads - 1;
out_shutdown:
	if (error < 0 && !nfsd_up_before)
		nfsd_shutdown_net(net);
out_destroy:
	nfsd_destroy(net);		/* Release server */
out:
	mutex_unlock(&nfsd_mutex);
	return error;
}

#if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
static bool
nfsd_support_acl_version(int vers)
{
	if (vers >= NFSD_ACL_MINVERS && vers < NFSD_ACL_NRVERS)
		return nfsd_acl_version[vers] != NULL;
	return false;
}

static int
nfsd_acl_rpcbind_set(struct net *net, const struct svc_program *progp,
		     u32 version, int family, unsigned short proto,
		     unsigned short port)
{
	if (!nfsd_support_acl_version(version) ||
	    !nfsd_vers(net_generic(net, nfsd_net_id), version, NFSD_TEST))
		return 0;
	return svc_generic_rpcbind_set(net, progp, version, family,
			proto, port);
}

static __be32
nfsd_acl_init_request(struct svc_rqst *rqstp,
		      const struct svc_program *progp,
		      struct svc_process_info *ret)
{
	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
	int i;

	if (likely(nfsd_support_acl_version(rqstp->rq_vers) &&
	    nfsd_vers(nn, rqstp->rq_vers, NFSD_TEST)))
		return svc_generic_init_request(rqstp, progp, ret);

	ret->mismatch.lovers = NFSD_ACL_NRVERS;
	for (i = NFSD_ACL_MINVERS; i < NFSD_ACL_NRVERS; i++) {
		if (nfsd_support_acl_version(rqstp->rq_vers) &&
		    nfsd_vers(nn, i, NFSD_TEST)) {
			ret->mismatch.lovers = i;
			break;
		}
	}
	if (ret->mismatch.lovers == NFSD_ACL_NRVERS)
		return rpc_prog_unavail;
	ret->mismatch.hivers = NFSD_ACL_MINVERS;
	for (i = NFSD_ACL_NRVERS - 1; i >= NFSD_ACL_MINVERS; i--) {
		if (nfsd_support_acl_version(rqstp->rq_vers) &&
		    nfsd_vers(nn, i, NFSD_TEST)) {
			ret->mismatch.hivers = i;
			break;
		}
	}
	return rpc_prog_mismatch;
}
#endif

static int
nfsd_rpcbind_set(struct net *net, const struct svc_program *progp,
		 u32 version, int family, unsigned short proto,
		 unsigned short port)
{
	if (!nfsd_vers(net_generic(net, nfsd_net_id), version, NFSD_TEST))
		return 0;
	return svc_generic_rpcbind_set(net, progp, version, family,
			proto, port);
}

static __be32
nfsd_init_request(struct svc_rqst *rqstp,
		  const struct svc_program *progp,
		  struct svc_process_info *ret)
{
	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
	int i;

	if (likely(nfsd_vers(nn, rqstp->rq_vers, NFSD_TEST)))
		return svc_generic_init_request(rqstp, progp, ret);

	ret->mismatch.lovers = NFSD_NRVERS;
	for (i = NFSD_MINVERS; i < NFSD_NRVERS; i++) {
		if (nfsd_vers(nn, i, NFSD_TEST)) {
			ret->mismatch.lovers = i;
			break;
		}
	}
	if (ret->mismatch.lovers == NFSD_NRVERS)
		return rpc_prog_unavail;
	ret->mismatch.hivers = NFSD_MINVERS;
	for (i = NFSD_NRVERS - 1; i >= NFSD_MINVERS; i--) {
		if (nfsd_vers(nn, i, NFSD_TEST)) {
			ret->mismatch.hivers = i;
			break;
		}
	}
	return rpc_prog_mismatch;
}
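/*
 * Illustrative example (not part of the original source): with only
 * NFSv4 enabled, a request for NFSv2 falls through to the mismatch path
 * and the reply advertises lovers = hivers = 4, so the client learns
 * the server's supported version range.
 */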

/*
 * This is the NFS server kernel thread
 */
static int
nfsd(void *vrqstp)
{
	struct svc_rqst *rqstp = (struct svc_rqst *) vrqstp;
	struct svc_xprt *perm_sock = list_entry(rqstp->rq_server->sv_permsocks.next, typeof(struct svc_xprt), xpt_list);
	struct net *net = perm_sock->xpt_net;
	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
	int err;

	/* Lock module and set up kernel thread */
	mutex_lock(&nfsd_mutex);

	/* At this point, the thread shares current->fs
	 * with the init process. We need to create files with the
	 * umask as defined by the client instead of init's umask.
	 */
	if (unshare_fs_struct() < 0) {
		printk("Unable to start nfsd thread: out of memory\n");
		goto out;
	}

	current->fs->umask = 0;

	/*
	 * thread is spawned with all signals set to SIG_IGN, re-enable
	 * the ones that will bring down the thread
	 */
	allow_signal(SIGKILL);
	allow_signal(SIGHUP);
	allow_signal(SIGINT);
	allow_signal(SIGQUIT);

	nfsdstats.th_cnt++;
	mutex_unlock(&nfsd_mutex);

	set_freezable();

	/*
	 * The main request loop
	 */
	for (;;) {
		/* Update sv_maxconn if it has changed */
		rqstp->rq_server->sv_maxconn = nn->max_connections;

		/*
		 * Find a socket with data available and call its
		 * recvfrom routine.
		 */
		while ((err = svc_recv(rqstp, 60*60*HZ)) == -EAGAIN)
			;
		if (err == -EINTR)
			break;
		validate_process_creds();
		svc_process(rqstp);
		validate_process_creds();
	}

	/* Clear signals before calling svc_exit_thread() */
	flush_signals(current);

	mutex_lock(&nfsd_mutex);
	nfsdstats.th_cnt--;

out:
	rqstp->rq_server = NULL;

	/* Release the thread */
	svc_exit_thread(rqstp);

	nfsd_destroy(net);

	/* Release module */
	mutex_unlock(&nfsd_mutex);
	module_put_and_exit(0);
	return 0;
}

static __be32 map_new_errors(u32 vers, __be32 nfserr)
{
	if (nfserr == nfserr_jukebox && vers == 2)
		return nfserr_dropit;
	if (nfserr == nfserr_wrongsec && vers < 4)
		return nfserr_acces;
	return nfserr;
}

/*
 * A write procedure can have a large argument, and a read procedure can
 * have a large reply, but no NFSv2 or NFSv3 procedure has argument and
 * reply that can both be larger than a page.  The xdr code has taken
 * advantage of this assumption to be sloppy about bounds checking in
 * some cases.  Pending a rewrite of the NFSv2/v3 xdr code to fix that
 * problem, we enforce these assumptions here:
 */
static bool nfs_request_too_big(struct svc_rqst *rqstp,
				const struct svc_procedure *proc)
{
	/*
	 * The ACL code has more careful bounds-checking and is not
	 * susceptible to this problem:
	 */
	if (rqstp->rq_prog != NFS_PROGRAM)
		return false;
	/*
	 * Ditto NFSv4 (which can in theory have argument and reply both
	 * more than a page):
	 */
	if (rqstp->rq_vers >= 4)
		return false;
	/* The reply will be small, we're OK: */
	if (proc->pc_xdrressize > 0 &&
	    proc->pc_xdrressize < XDR_QUADLEN(PAGE_SIZE))
		return false;

	return rqstp->rq_arg.len > PAGE_SIZE;
}

int
nfsd_dispatch(struct svc_rqst *rqstp, __be32 *statp)
{
	const struct svc_procedure *proc;
	__be32 nfserr;
	__be32 *nfserrp;

	dprintk("nfsd_dispatch: vers %d proc %d\n",
				rqstp->rq_vers, rqstp->rq_proc);
	proc = rqstp->rq_procinfo;

	if (nfs_request_too_big(rqstp, proc)) {
		dprintk("nfsd: NFSv%d argument too large\n", rqstp->rq_vers);
		*statp = rpc_garbage_args;
		return 1;
	}
	/*
	 * Give the xdr decoder a chance to change this if it wants
	 * (necessary in the NFSv4.0 compound case)
	 */
	rqstp->rq_cachetype = proc->pc_cachetype;
	/* Decode arguments */
	if (proc->pc_decode &&
	    !proc->pc_decode(rqstp, (__be32*)rqstp->rq_arg.head[0].iov_base)) {
		dprintk("nfsd: failed to decode arguments!\n");
		*statp = rpc_garbage_args;
		return 1;
	}

	/* Check whether we have this call in the cache. */
	switch (nfsd_cache_lookup(rqstp)) {
	case RC_DROPIT:
		return 0;
	case RC_REPLY:
		return 1;
	case RC_DOIT:;
		/* do it */
	}

	/* need to grab the location to store the status, as
	 * nfsv4 does some encoding while processing
	 */
	nfserrp = rqstp->rq_res.head[0].iov_base
		+ rqstp->rq_res.head[0].iov_len;
	rqstp->rq_res.head[0].iov_len += sizeof(__be32);

	/* Now call the procedure handler, and encode NFS status. */
	nfserr = proc->pc_func(rqstp);
	nfserr = map_new_errors(rqstp->rq_vers, nfserr);
	if (nfserr == nfserr_dropit || test_bit(RQ_DROPME, &rqstp->rq_flags)) {
		dprintk("nfsd: Dropping request; may be revisited later\n");
		nfsd_cache_update(rqstp, RC_NOCACHE, NULL);
		return 0;
	}

	if (rqstp->rq_proc != 0)
		*nfserrp++ = nfserr;

	/* Encode result.
	 * For NFSv2, additional info is never returned in case of an error.
	 */
	if (!(nfserr && rqstp->rq_vers == 2)) {
		if (proc->pc_encode && !proc->pc_encode(rqstp, nfserrp)) {
			/* Failed to encode result. Release cache entry */
			dprintk("nfsd: failed to encode result!\n");
			nfsd_cache_update(rqstp, RC_NOCACHE, NULL);
			*statp = rpc_system_err;
			return 1;
		}
	}

	/* Store reply in cache. */
	nfsd_cache_update(rqstp, rqstp->rq_cachetype, statp + 1);
	return 1;
}

int nfsd_pool_stats_open(struct inode *inode, struct file *file)
{
	int ret;
	struct nfsd_net *nn = net_generic(inode->i_sb->s_fs_info, nfsd_net_id);

	mutex_lock(&nfsd_mutex);
	if (nn->nfsd_serv == NULL) {
		mutex_unlock(&nfsd_mutex);
		return -ENODEV;
	}
	/* bump up the pseudo refcount while traversing */
	svc_get(nn->nfsd_serv);
	ret = svc_pool_stats_open(nn->nfsd_serv, file);
	mutex_unlock(&nfsd_mutex);
	return ret;
}

int nfsd_pool_stats_release(struct inode *inode, struct file *file)
{
	int ret = seq_release(inode, file);
	struct net *net = inode->i_sb->s_fs_info;

	mutex_lock(&nfsd_mutex);
	/* this function really, really should have been called svc_put() */
	nfsd_destroy(net);
	mutex_unlock(&nfsd_mutex);
	return ret;
}