/*
 * Copyright (C) 1992 Krishna Balasubramanian and Linus Torvalds
 * Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
 * Copyright (C) 2002 Andi Kleen
 *
 * This handles calls from both 32bit and 64bit mode.
 */

#include <linux/errno.h>
|
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities include those
headers directly instead of assuming availability. As this conversion
needs to touch large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the followings.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. ie. if only gfp is used,
gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and try to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
wildly available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build test were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable easily on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-24 15:04:11 +07:00
|
|
|
#include <linux/gfp.h>
|
2005-04-17 05:20:36 +07:00
|
|
|
#include <linux/sched.h>
|
|
|
|
#include <linux/string.h>
|
|
|
|
#include <linux/mm.h>
|
|
|
|
#include <linux/smp.h>
|
2015-07-31 04:31:32 +07:00
|
|
|
#include <linux/slab.h>
|
2005-04-17 05:20:36 +07:00
|
|
|
#include <linux/vmalloc.h>
|
2008-12-31 18:12:20 +07:00
|
|
|
#include <linux/uaccess.h>
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
#include <asm/ldt.h>
|
|
|
|
#include <asm/desc.h>
|
2008-01-30 19:30:13 +07:00
|
|
|
#include <asm/mmu_context.h>
|
2008-07-21 23:04:13 +07:00
|
|
|
#include <asm/syscalls.h>
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2017-07-26 21:16:30 +07:00
|
|
|
static void refresh_ldt_segments(void)
{
#ifdef CONFIG_X86_64
	unsigned short sel;

	/*
	 * Make sure that the cached DS and ES descriptors match the updated
	 * LDT.
	 */
	savesegment(ds, sel);
	if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT)
		loadsegment(ds, sel);

	savesegment(es, sel);
	if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT)
		loadsegment(es, sel);
#endif
}
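
/*
 * Selector layout refresher (illustrative, not used by the code): bits
 * 15:3 of a selector are the descriptor index, bit 2 is the table
 * indicator (TI, 1 == LDT) and bits 1:0 are the RPL. E.g. sel == 0x07
 * is LDT entry 0 at RPL 3, which the
 * (sel & SEGMENT_TI_MASK) == SEGMENT_LDT test above matches.
 */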

/* context.lock is held for us, so we don't need any locking. */
static void flush_ldt(void *__mm)
{
	struct mm_struct *mm = __mm;
	mm_context_t *pc;

	if (this_cpu_read(cpu_tlbstate.loaded_mm) != mm)
		return;

	pc = &mm->context;
	set_ldt(pc->ldt->entries, pc->ldt->nr_entries);

	refresh_ldt_segments();
}

/* The caller must call finalize_ldt_struct on the result. LDT starts zeroed. */
static struct ldt_struct *alloc_ldt_struct(unsigned int num_entries)
{
	struct ldt_struct *new_ldt;
	unsigned int alloc_size;

	if (num_entries > LDT_ENTRIES)
		return NULL;

	new_ldt = kmalloc(sizeof(struct ldt_struct), GFP_KERNEL);
	if (!new_ldt)
		return NULL;

	BUILD_BUG_ON(LDT_ENTRY_SIZE != sizeof(struct desc_struct));
	alloc_size = num_entries * LDT_ENTRY_SIZE;

	/*
	 * Xen is very picky: it requires a page-aligned LDT that has no
	 * trailing nonzero bytes in any page that contains LDT descriptors.
	 * Keep it simple: zero the whole allocation and never allocate less
	 * than PAGE_SIZE.
	 */
	if (alloc_size > PAGE_SIZE)
		new_ldt->entries = vzalloc(alloc_size);
	else
		new_ldt->entries = (void *)get_zeroed_page(GFP_KERNEL);

	if (!new_ldt->entries) {
		kfree(new_ldt);
		return NULL;
	}

	new_ldt->nr_entries = num_entries;
	return new_ldt;
}
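
/*
 * Sizing sketch (illustrative, assuming the usual x86 constants): with
 * LDT_ENTRY_SIZE == 8 and PAGE_SIZE == 4096, tables of up to 512 entries
 * fit in one zeroed page; anything larger, up to LDT_ENTRIES == 8192
 * (64KB), takes the vzalloc() path in alloc_ldt_struct().
 */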

/* After calling this, the LDT is immutable. */
static void finalize_ldt_struct(struct ldt_struct *ldt)
{
	paravirt_alloc_ldt(ldt->entries, ldt->nr_entries);
}

/* context.lock is held */
static void install_ldt(struct mm_struct *current_mm,
			struct ldt_struct *ldt)
{
	/* Synchronizes with lockless_dereference in load_mm_ldt. */
	smp_store_release(&current_mm->context.ldt, ldt);

	/* Activate the LDT for all CPUs using current_mm. */
	on_each_cpu_mask(mm_cpumask(current_mm), flush_ldt, current_mm, true);
}
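
/*
 * Reader-side sketch for context (paraphrased; the authoritative version
 * is load_mm_ldt() in asm/mmu_context.h and may differ by kernel version):
 *
 *	struct ldt_struct *ldt;
 *
 *	// Pairs with the smp_store_release() in install_ldt() above.
 *	ldt = lockless_dereference(mm->context.ldt);
 *	if (unlikely(ldt))
 *		set_ldt(ldt->entries, ldt->nr_entries);
 *	else
 *		clear_LDT();
 */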
static void free_ldt_struct(struct ldt_struct *ldt)
{
	if (likely(!ldt))
		return;

	paravirt_free_ldt(ldt->entries, ldt->nr_entries);
	if (ldt->nr_entries * LDT_ENTRY_SIZE > PAGE_SIZE)
		vfree_atomic(ldt->entries);
	else
		free_page((unsigned long)ldt->entries);
	kfree(ldt);
}

/*
 * we do not have to muck with descriptors here, that is
 * done in switch_mm() as needed.
 */
int init_new_context_ldt(struct task_struct *tsk, struct mm_struct *mm)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2015-07-31 04:31:32 +07:00
|
|
|
struct ldt_struct *new_ldt;
|
2008-01-30 19:30:13 +07:00
|
|
|
struct mm_struct *old_mm;
|
2005-04-17 05:20:36 +07:00
|
|
|
int retval = 0;
|
|
|
|
|
2007-10-17 23:04:41 +07:00
|
|
|
mutex_init(&mm->context.lock);
|
2005-04-17 05:20:36 +07:00
|
|
|
old_mm = current->mm;
|
2015-07-31 04:31:32 +07:00
|
|
|
if (!old_mm) {
|
|
|
|
mm->context.ldt = NULL;
|
|
|
|
return 0;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2015-07-31 04:31:32 +07:00
|
|
|
|
|
|
|
mutex_lock(&old_mm->context.lock);
|
|
|
|
if (!old_mm->context.ldt) {
|
|
|
|
mm->context.ldt = NULL;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
|
2017-06-07 00:31:16 +07:00
|
|
|
new_ldt = alloc_ldt_struct(old_mm->context.ldt->nr_entries);
|
2015-07-31 04:31:32 +07:00
|
|
|
if (!new_ldt) {
|
|
|
|
retval = -ENOMEM;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
|
|
|
|
memcpy(new_ldt->entries, old_mm->context.ldt->entries,
|
2017-06-07 00:31:16 +07:00
|
|
|
new_ldt->nr_entries * LDT_ENTRY_SIZE);
|
2015-07-31 04:31:32 +07:00
|
|
|
finalize_ldt_struct(new_ldt);
|
|
|
|
|
|
|
|
mm->context.ldt = new_ldt;
|
|
|
|
|
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&old_mm->context.lock);
|
2005-04-17 05:20:36 +07:00
|
|
|
return retval;
|
|
|
|
}

/*
 * No need to lock the MM as we are the last user
 *
 * 64bit: Don't touch the LDT register - we're already in the next thread.
 */
void destroy_context_ldt(struct mm_struct *mm)
{
	free_ldt_struct(mm->context.ldt);
	mm->context.ldt = NULL;
}
static int read_ldt(void __user *ptr, unsigned long bytecount)
{
	struct mm_struct *mm = current->mm;
	unsigned long entries_size;
	int retval;

	mutex_lock(&mm->context.lock);

	if (!mm->context.ldt) {
		retval = 0;
		goto out_unlock;
	}

	if (bytecount > LDT_ENTRY_SIZE * LDT_ENTRIES)
		bytecount = LDT_ENTRY_SIZE * LDT_ENTRIES;

	entries_size = mm->context.ldt->nr_entries * LDT_ENTRY_SIZE;
	if (entries_size > bytecount)
		entries_size = bytecount;

	if (copy_to_user(ptr, mm->context.ldt->entries, entries_size)) {
		retval = -EFAULT;
		goto out_unlock;
	}

	if (entries_size != bytecount) {
		/* Zero-fill the rest and pretend we read bytecount bytes. */
		if (clear_user(ptr + entries_size, bytecount - entries_size)) {
			retval = -EFAULT;
			goto out_unlock;
		}
	}
	retval = bytecount;

out_unlock:
	mutex_unlock(&mm->context.lock);
	return retval;
}
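
/*
 * Worked example (illustrative): with a one-entry LDT (8 bytes) and
 * bytecount == 32, bytes 0..7 are copied out, bytes 8..31 are zero-filled
 * and the call returns 32.
 */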
static int read_default_ldt(void __user *ptr, unsigned long bytecount)
{
	/* CHECKME: Can we use _one_ random number? */
#ifdef CONFIG_X86_32
	unsigned long size = 5 * sizeof(struct desc_struct);
#else
	unsigned long size = 128;
#endif
	if (bytecount > size)
		bytecount = size;
	if (clear_user(ptr, bytecount))
		return -EFAULT;
	return bytecount;
}
static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
{
	struct mm_struct *mm = current->mm;
	struct ldt_struct *new_ldt, *old_ldt;
	unsigned int old_nr_entries, new_nr_entries;
	struct user_desc ldt_info;
	struct desc_struct ldt;
	int error;

	error = -EINVAL;
	if (bytecount != sizeof(ldt_info))
		goto out;
	error = -EFAULT;
	if (copy_from_user(&ldt_info, ptr, sizeof(ldt_info)))
		goto out;

	error = -EINVAL;
	if (ldt_info.entry_number >= LDT_ENTRIES)
		goto out;
	if (ldt_info.contents == 3) {
		if (oldmode)
			goto out;
		if (ldt_info.seg_not_present == 0)
			goto out;
	}

	if ((oldmode && !ldt_info.base_addr && !ldt_info.limit) ||
	    LDT_empty(&ldt_info)) {
		/* The user wants to clear the entry. */
		memset(&ldt, 0, sizeof(ldt));
	} else {
		if (!IS_ENABLED(CONFIG_X86_16BIT) && !ldt_info.seg_32bit) {
			error = -EINVAL;
			goto out;
		}

		fill_ldt(&ldt, &ldt_info);
		if (oldmode)
			ldt.avl = 0;
	}

	mutex_lock(&mm->context.lock);

	old_ldt = mm->context.ldt;
	old_nr_entries = old_ldt ? old_ldt->nr_entries : 0;
	new_nr_entries = max(ldt_info.entry_number + 1, old_nr_entries);

	error = -ENOMEM;
	new_ldt = alloc_ldt_struct(new_nr_entries);
	if (!new_ldt)
		goto out_unlock;

	if (old_ldt)
		memcpy(new_ldt->entries, old_ldt->entries, old_nr_entries * LDT_ENTRY_SIZE);

	new_ldt->entries[ldt_info.entry_number] = ldt;
	finalize_ldt_struct(new_ldt);

	install_ldt(mm, new_ldt);
	free_ldt_struct(old_ldt);
	error = 0;

out_unlock:
	mutex_unlock(&mm->context.lock);
out:
	return error;
}
asmlinkage int sys_modify_ldt(int func, void __user *ptr,
			      unsigned long bytecount)
{
	int ret = -ENOSYS;

	switch (func) {
	case 0:
		ret = read_ldt(ptr, bytecount);
		break;
	case 1:
		ret = write_ldt(ptr, bytecount, 1);
		break;
	case 2:
		ret = read_default_ldt(ptr, bytecount);
		break;
	case 0x11:
		ret = write_ldt(ptr, bytecount, 0);
		break;
	}
	return ret;
}
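
/*
 * Illustrative userspace sketch (not part of this file): modify_ldt() has
 * no glibc wrapper, so callers go through syscall(2). func 0 reads the
 * current LDT, func 2 reads the default LDT, and func 1/0x11 write one
 * entry in old/new mode. E.g. installing a flat 32-bit data segment at
 * entry 0:
 *
 *	#include <asm/ldt.h>
 *	#include <sys/syscall.h>
 *	#include <unistd.h>
 *
 *	struct user_desc desc = {
 *		.entry_number   = 0,
 *		.base_addr      = 0,
 *		.limit          = 0xfffff,
 *		.seg_32bit      = 1,
 *		.limit_in_pages = 1,
 *		.useable        = 1,
 *	};
 *	long err = syscall(SYS_modify_ldt, 0x11, &desc, sizeof(desc));
 */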