x86/fsgsbase: Revert FSGSBASE support

The FSGSBASE series turned out to have serious bugs and there is still an
open issue which is not fully understood yet.

The confidence in those changes has become close to zero especially as the
test cases which have been shipped with that series were obviously never
run before sending the final series out to LKML.

  ./fsgsbase_64 >/dev/null
  Segmentation fault

As the merge window is close, the only sane decision is to revert FSGSBASE
support. The revert is necessary as this branch has been merged into
perf/core already and rebasing all of that a few days before the merge
window is not the most brilliant idea.

I could definitely slap myself for not noticing the test case fail when
merging that series, but TBH my expectations weren't that low back
then. Won't happen again.

Revert the following commits:
539bca535d ("x86/entry/64: Fix and clean up paranoid_exit")
2c7b5ac5d5 ("Documentation/x86/64: Add documentation for GS/FS addressing mode")
f987c955c7 ("x86/elf: Enumerate kernel FSGSBASE capability in AT_HWCAP2")
2032f1f96e ("x86/cpu: Enable FSGSBASE on 64bit by default and add a chicken bit")
5bf0cab60e ("x86/entry/64: Document GSBASE handling in the paranoid path")
708078f657 ("x86/entry/64: Handle FSGSBASE enabled paranoid entry/exit")
79e1932fa3 ("x86/entry/64: Introduce the FIND_PERCPU_BASE macro")
1d07316b13 ("x86/entry/64: Switch CR3 before SWAPGS in paranoid entry")
f60a83df45 ("x86/process/64: Use FSGSBASE instructions on thread copy and ptrace")
1ab5f3f7fe ("x86/process/64: Use FSBSBASE in switch_to() if available")
a86b462513 ("x86/fsgsbase/64: Enable FSGSBASE instructions in helper functions")
8b71340d70 ("x86/fsgsbase/64: Add intrinsics for FSGSBASE instructions")
b64ed19b93 ("x86/cpu: Add 'unsafe_fsgsbase' to enable CR4.FSGSBASE")

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Chang S. Bae <chang.seok.bae@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ravi Shankar <ravi.v.shankar@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Thomas Gleixner, 2019-07-03 14:19:36 +02:00
commit 049331f277, parent 697096b144
11 changed files with 57 additions and 544 deletions


@@ -2857,8 +2857,6 @@
no5lvl [X86-64] Disable 5-level paging mode. Forces
kernel to use 4-level paging instead.
nofsgsbase [X86] Disables FSGSBASE instructions.
no_console_suspend
[HW] Never suspend the console
Disable suspending of consoles during suspend and


@@ -108,12 +108,3 @@ We try to only use IST entries and the paranoid entry code for vectors
that absolutely need the more expensive check for the GS base - and we
generate all 'normal' entry points with the regular (faster) paranoid=0
variant.
On a FSGSBASE system, however, user space can set GS without kernel
interaction. It means the value of GS base itself does not imply anything,
whether a kernel value or a user space value. So, there is no longer a safe
way to check whether the exception is entering from user mode or kernel
mode in the paranoid entry code path. So the GSBASE value needs to be read
out, saved and the kernel GSBASE value written. On exit the saved GSBASE
value needs to be restored unconditionally. The non paranoid entry/exit
code still uses SWAPGS unconditionally as the state is known.


@@ -1,199 +0,0 @@
.. SPDX-License-Identifier: GPL-2.0
Using FS and GS segments in user space applications
===================================================
The x86 architecture supports segmentation. Instructions which access
memory can use segment register based addressing mode. The following
notation is used to address a byte within a segment:
Segment-register:Byte-address
The segment base address is added to the Byte-address to compute the
resulting virtual address which is accessed. This allows to access multiple
instances of data with the identical Byte-address, i.e. the same code. The
selection of a particular instance is purely based on the base-address in
the segment register.
In 32-bit mode the CPU provides 6 segments, which also support segment
limits. The limits can be used to enforce address space protections.
In 64-bit mode the CS/SS/DS/ES segments are ignored and the base address is
always 0 to provide a full 64bit address space. The FS and GS segments are
still functional in 64-bit mode.
Common FS and GS usage
------------------------------
The FS segment is commonly used to address Thread Local Storage (TLS). FS
is usually managed by runtime code or a threading library. Variables
declared with the '__thread' storage class specifier are instantiated per
thread and the compiler emits the FS: address prefix for accesses to these
variables. Each thread has its own FS base address so common code can be
used without complex address offset calculations to access the per thread
instances. Applications should not use FS for other purposes when they use
runtimes or threading libraries which manage the per thread FS.
The GS segment has no common use and can be used freely by
applications. GCC and Clang support GS based addressing via address space
identifiers.
Reading and writing the FS/GS base address
------------------------------------------
There exist two mechanisms to read and write the FS/GS base address:
- the arch_prctl() system call
- the FSGSBASE instruction family
Accessing FS/GS base with arch_prctl()
--------------------------------------
The arch_prctl(2) based mechanism is available on all 64bit CPUs and all
kernel versions.
Reading the base:
arch_prctl(ARCH_GET_FS, &fsbase);
arch_prctl(ARCH_GET_GS, &gsbase);
Writing the base:
arch_prctl(ARCH_SET_FS, fsbase);
arch_prctl(ARCH_SET_GS, gsbase);
The ARCH_SET_GS prctl may be disabled depending on kernel configuration
and security settings.
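As a minimal, self-contained sketch (going through syscall(2) directly, with <asm/prctl.h> assumed to provide the ARCH_* constants; glibc versions differ in whether they expose an arch_prctl() wrapper), reading and writing the FS base could look like this::

  #include <asm/prctl.h>      /* ARCH_GET_FS, ARCH_SET_FS */
  #include <sys/syscall.h>    /* SYS_arch_prctl */
  #include <unistd.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned long fsbase;

          /* Read the current FS base of this thread */
          if (syscall(SYS_arch_prctl, ARCH_GET_FS, &fsbase))
                  return 1;
          printf("FS base: %#lx\n", fsbase);

          /* Writing the same value back shows the ARCH_SET_FS side */
          if (syscall(SYS_arch_prctl, ARCH_SET_FS, fsbase))
                  return 1;
          return 0;
  }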
Accessing FS/GS base with the FSGSBASE instructions
---------------------------------------------------
With the Ivy Bridge CPU generation Intel introduced a new set of
instructions to access the FS and GS base registers directly from user
space. These instructions are also supported on AMD Family 17H CPUs. The
following instructions are available:
=============== ===========================
RDFSBASE %reg Read the FS base register
RDGSBASE %reg Read the GS base register
WRFSBASE %reg Write the FS base register
WRGSBASE %reg Write the GS base register
=============== ===========================
The instructions avoid the overhead of the arch_prctl() syscall and allow
more flexible usage of the FS/GS addressing modes in user space
applications. This does not prevent conflicts between threading libraries
and runtimes which utilize FS and applications which want to use it for
their own purpose.
FSGSBASE instructions enablement
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The instructions are enumerated in CPUID leaf 7, bit 0 of EBX. If
available /proc/cpuinfo shows 'fsgsbase' in the flag entry of the CPUs.
The availability of the instructions does not enable them
automatically. The kernel has to enable them explicitly in CR4. The
reason for this is that older kernels make assumptions about the values in
the GS register and enforce them when GS base is set via
arch_prctl(). Allowing user space to write arbitrary values to GS base
would violate these assumptions and cause malfunction.
On kernels which do not enable FSGSBASE the execution of the FSGSBASE
instructions will fault with a #UD exception.
The kernel provides reliable information about the enabled state in the
ELF AUX vector. If the HWCAP2_FSGSBASE bit is set in the AUX vector, the
kernel has FSGSBASE instructions enabled and applications can use them.
The following code example shows how this detection works::
#include <sys/auxv.h>
#include <elf.h>
/* Will be eventually in asm/hwcap.h */
#ifndef HWCAP2_FSGSBASE
#define HWCAP2_FSGSBASE (1 << 1)
#endif
....
unsigned val = getauxval(AT_HWCAP2);
if (val & HWCAP2_FSGSBASE)
printf("FSGSBASE enabled\n");
FSGSBASE instructions compiler support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
GCC version 4.6.4 and newer provide intrinsics for the FSGSBASE
instructions. Clang supports them as well.
=================== ===========================
_readfsbase_u64() Read the FS base register
_readgsbase_u64() Read the GS base register
_writefsbase_u64() Write the FS base register
_writegsbase_u64() Write the GS base register
=================== ===========================
To utilize these intrinsics <immintrin.h> must be included in the source
code and the compiler option -mfsgsbase has to be added.
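A minimal sketch, assuming the HWCAP2_FSGSBASE check from the previous section has already succeeded (otherwise the instructions raise #UD and the process receives SIGILL) and that the file is built with -mfsgsbase::

  #include <immintrin.h>      /* _readgsbase_u64(), _writegsbase_u64() */
  #include <stdio.h>

  int main(void)
  {
          /* Assumes the kernel advertised HWCAP2_FSGSBASE. */
          unsigned long long gsbase = _readgsbase_u64();

          printf("GS base: %#llx\n", gsbase);

          /* Writing the value back demonstrates the write intrinsic. */
          _writegsbase_u64(gsbase);
          return 0;
  }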
Compiler support for FS/GS based addressing
-------------------------------------------
GCC version 6 and newer provide support for FS/GS based addressing via
Named Address Spaces. GCC implements the following address space
identifiers for x86:
========= ====================================
__seg_fs Variable is addressed relative to FS
__seg_gs Variable is addressed relative to GS
========= ====================================
The preprocessor symbols __SEG_FS and __SEG_GS are defined when these
address spaces are supported. Code which implements fallback modes should
check whether these symbols are defined. Usage example::
#ifdef __SEG_GS
long data0 = 0;
long data1 = 1;
long __seg_gs *ptr;
/* Check whether FSGSBASE is enabled by the kernel (HWCAP2_FSGSBASE) */
....
/* Set GS to point to data0 */
_writegsbase_u64(&data0);
/* Access offset 0 of GS */
ptr = 0;
printf("data0 = %ld\n", *ptr);
/* Set GS to point to data1 */
_writegsbase_u64(&data1);
/* ptr still addresses offset 0! */
printf("data1 = %ld\n", *ptr);
Clang does not provide the GCC address space identifiers, but it provides
address spaces via an attribute based mechanism in Clang 5 and newer
versions:
==================================== =====================================
__attribute__((address_space(256))) Variable is addressed relative to GS
__attribute__((address_space(257))) Variable is addressed relative to FS
==================================== =====================================
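A hedged sketch of the Clang variant, mirroring the GCC example above (the GS_RELATIVE macro and the read_gs_slot0() helper are purely illustrative names; building it assumes Clang 5 or newer and -mfsgsbase for the intrinsic)::

  #include <immintrin.h>      /* _writegsbase_u64(), needs -mfsgsbase */
  #include <stdio.h>

  #define GS_RELATIVE __attribute__((address_space(256)))

  static long data0 = 17;

  static long read_gs_slot0(void)
  {
          /* Offset 0 relative to the GS base, i.e. %gs:0 */
          long GS_RELATIVE *ptr = (long GS_RELATIVE *)0;

          return *ptr;
  }

  int main(void)
  {
          /* Assumes HWCAP2_FSGSBASE has been checked as described above. */
          _writegsbase_u64((unsigned long long)&data0);
          printf("data0 = %ld\n", read_gs_slot0());
          return 0;
  }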
FS/GS based addressing with inline assembly
-------------------------------------------
In case the compiler does not support address spaces, inline assembly can
be used for FS/GS based addressing mode::
mov %fs:offset, %reg
mov %gs:offset, %reg
mov %reg, %fs:offset
mov %reg, %gs:offset
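Wrapped into GNU C inline assembly, a minimal sketch could look like this (gs_read()/gs_write() are illustrative helper names, not an existing API)::

  static inline unsigned long gs_read(unsigned long offset)
  {
          unsigned long val;

          /* Load 8 bytes from %gs:offset */
          asm volatile("movq %%gs:(%1), %0" : "=r" (val) : "r" (offset));
          return val;
  }

  static inline void gs_write(unsigned long offset, unsigned long val)
  {
          /* Store 8 bytes to %gs:offset */
          asm volatile("movq %0, %%gs:(%1)" : : "r" (val), "r" (offset) : "memory");
  }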


@@ -14,4 +14,3 @@ x86_64 Support
fake-numa-for-cpusets
cpu-hotplug-spec
machinecheck
fsgs


@@ -6,7 +6,6 @@
#include <asm/percpu.h>
#include <asm/asm-offsets.h>
#include <asm/processor-flags.h>
#include <asm/inst.h>
/*
@@ -338,12 +337,6 @@ For 32-bit we have the following conventions - kernel is built with
#endif
.endm
.macro SAVE_AND_SET_GSBASE scratch_reg:req save_reg:req
rdgsbase \save_reg
GET_PERCPU_BASE \scratch_reg
wrgsbase \scratch_reg
.endm
#endif /* CONFIG_X86_64 */
.macro STACKLEAK_ERASE
@@ -352,39 +345,6 @@ For 32-bit we have the following conventions - kernel is built with
#endif
.endm
#ifdef CONFIG_SMP
/*
* CPU/node NR is loaded from the limit (size) field of a special segment
* descriptor entry in GDT.
*/
.macro LOAD_CPU_AND_NODE_SEG_LIMIT reg:req
movq $__CPUNODE_SEG, \reg
lsl \reg, \reg
.endm
/*
* Fetch the per-CPU GSBASE value for this processor and put it in @reg.
* We normally use %gs for accessing per-CPU data, but we are setting up
* %gs here and obviously can not use %gs itself to access per-CPU data.
*/
.macro GET_PERCPU_BASE reg:req
ALTERNATIVE \
"LOAD_CPU_AND_NODE_SEG_LIMIT \reg", \
"RDPID \reg", \
X86_FEATURE_RDPID
andq $VDSO_CPUNODE_MASK, \reg
movq __per_cpu_offset(, \reg, 8), \reg
.endm
#else
.macro GET_PERCPU_BASE reg:req
movq pcpu_unit_offsets(%rip), \reg
.endm
#endif /* CONFIG_SMP */
/*
* This does 'call enter_from_user_mode' unless we can avoid it based on
* kernel config or using the static jump infrastructure.


@@ -38,7 +38,6 @@
#include <asm/export.h>
#include <asm/frame.h>
#include <asm/nospec-branch.h>
#include <asm/fsgsbase.h>
#include <linux/err.h>
#include "calling.h"
@@ -948,6 +947,7 @@ ENTRY(\sym)
addq $\ist_offset, CPU_TSS_IST(\shift_ist)
.endif
/* these procedures expect "no swapgs" flag in ebx */
.if \paranoid
jmp paranoid_exit
.else
@@ -1164,21 +1164,24 @@ idtentry machine_check do_mce has_error_code=0 paranoid=1
#endif
/*
* Save all registers in pt_regs. Return GSBASE related information
* in EBX depending on the availability of the FSGSBASE instructions:
*
* FSGSBASE R/EBX
* N 0 -> SWAPGS on exit
* 1 -> no SWAPGS on exit
*
* Y GSBASE value at entry, must be restored in paranoid_exit
* Save all registers in pt_regs, and switch gs if needed.
* Use slow, but surefire "are we in kernel?" check.
* Return: ebx=0: need swapgs on exit, ebx=1: otherwise
*/
ENTRY(paranoid_entry)
UNWIND_HINT_FUNC
cld
PUSH_AND_CLEAR_REGS save_ret=1
ENCODE_FRAME_POINTER 8
movl $1, %ebx
movl $MSR_GS_BASE, %ecx
rdmsr
testl %edx, %edx
js 1f /* negative -> in kernel */
SWAPGS
xorl %ebx, %ebx
1:
/*
* Always stash CR3 in %r14. This value will be restored,
* verbatim, at exit. Needed if paranoid_entry interrupted
@@ -1188,49 +1191,9 @@ ENTRY(paranoid_entry)
* This is also why CS (stashed in the "iret frame" by the
* hardware at entry) can not be used: this may be a return
* to kernel code, but with a user CR3 value.
*
* Switching CR3 does not depend on kernel GSBASE so it can
* be done before switching to the kernel GSBASE. This is
* required for FSGSBASE because the kernel GSBASE has to
* be retrieved from a kernel internal table.
*/
SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14
/*
* Handling GSBASE depends on the availability of FSGSBASE.
*
* Without FSGSBASE the kernel enforces that negative GSBASE
* values indicate kernel GSBASE. With FSGSBASE no assumptions
* can be made about the GSBASE value when entering from user
* space.
*/
ALTERNATIVE "jmp .Lparanoid_entry_checkgs", "", X86_FEATURE_FSGSBASE
/*
* Read the current GSBASE and store it in in %rbx unconditionally,
* retrieve and set the current CPUs kernel GSBASE. The stored value
* has to be restored in paranoid_exit unconditionally.
*/
SAVE_AND_SET_GSBASE scratch_reg=%rax save_reg=%rbx
ret
.Lparanoid_entry_checkgs:
/* EBX = 1 -> kernel GSBASE active, no restore required */
movl $1, %ebx
/*
* The kernel-enforced convention is a negative GSBASE indicates
* a kernel value. No SWAPGS needed on entry and exit.
*/
movl $MSR_GS_BASE, %ecx
rdmsr
testl %edx, %edx
jns .Lparanoid_entry_swapgs
ret
.Lparanoid_entry_swapgs:
SWAPGS
/* EBX = 0 -> SWAPGS required on exit */
xorl %ebx, %ebx
ret
END(paranoid_entry)
@@ -1241,48 +1204,28 @@ END(paranoid_entry)
*
* We may be returning to very strange contexts (e.g. very early
* in syscall entry), so checking for preemption here would
* be complicated. Fortunately, there's no good reason to try
* to handle preemption here.
* be complicated. Fortunately, we there's no good reason
* to try to handle preemption here.
*
* R/EBX contains the GSBASE related information depending on the
* availability of the FSGSBASE instructions:
*
* FSGSBASE R/EBX
* N 0 -> SWAPGS on exit
* 1 -> no SWAPGS on exit
*
* Y User space GSBASE, must be restored unconditionally
* On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it)
*/
ENTRY(paranoid_exit)
UNWIND_HINT_REGS
DISABLE_INTERRUPTS(CLBR_ANY)
/*
* The order of operations is important. IRQ tracing requires
* kernel GSBASE and CR3. RESTORE_CR3 requires kernel GS base.
*
* NB to anyone to tries to optimize this code: this code does
* not execute at all for exceptions coming from user mode. Those
* exceptions go through error_exit instead.
*/
TRACE_IRQS_IRETQ_DEBUG
RESTORE_CR3 scratch_reg=%rax save_reg=%r14
/* Handle the three GSBASE cases. */
ALTERNATIVE "jmp .Lparanoid_exit_checkgs", "", X86_FEATURE_FSGSBASE
/* With FSGSBASE enabled, unconditionally restore GSBASE */
wrgsbase %rbx
jmp restore_regs_and_return_to_kernel
.Lparanoid_exit_checkgs:
/* On non-FSGSBASE systems, conditionally do SWAPGS */
testl %ebx, %ebx
jnz restore_regs_and_return_to_kernel
/* We are returning to a context with user GSBASE. */
TRACE_IRQS_OFF_DEBUG
testl %ebx, %ebx /* swapgs needed? */
jnz .Lparanoid_exit_no_swapgs
TRACE_IRQS_IRETQ
/* Always restore stashed CR3 value (see paranoid_entry) */
RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
SWAPGS_UNSAFE_STACK
jmp restore_regs_and_return_to_kernel
jmp .Lparanoid_exit_restore
.Lparanoid_exit_no_swapgs:
TRACE_IRQS_IRETQ_DEBUG
/* Always restore stashed CR3 value (see paranoid_entry) */
RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
.Lparanoid_exit_restore:
jmp restore_regs_and_return_to_kernel
END(paranoid_exit)
/*
@@ -1693,27 +1636,10 @@ end_repeat_nmi:
/* Always restore stashed CR3 value (see paranoid_entry) */
RESTORE_CR3 scratch_reg=%r15 save_reg=%r14
/*
* The above invocation of paranoid_entry stored the GSBASE
* related information in R/EBX depending on the availability
* of FSGSBASE.
*
* If FSGSBASE is enabled, restore the saved GSBASE value
* unconditionally, otherwise take the conditional SWAPGS path.
*/
ALTERNATIVE "jmp nmi_no_fsgsbase", "", X86_FEATURE_FSGSBASE
wrgsbase %rbx
jmp nmi_restore
nmi_no_fsgsbase:
/* EBX == 0 -> invoke SWAPGS */
testl %ebx, %ebx
testl %ebx, %ebx /* swapgs needed? */
jnz nmi_restore
nmi_swapgs:
SWAPGS_UNSAFE_STACK
nmi_restore:
POP_REGS


@@ -19,62 +19,35 @@ extern unsigned long x86_gsbase_read_task(struct task_struct *task);
extern void x86_fsbase_write_task(struct task_struct *task, unsigned long fsbase);
extern void x86_gsbase_write_task(struct task_struct *task, unsigned long gsbase);
/* Must be protected by X86_FEATURE_FSGSBASE check. */
static __always_inline unsigned long rdfsbase(void)
{
unsigned long fsbase;
asm volatile("rdfsbase %0" : "=r" (fsbase) :: "memory");
return fsbase;
}
static __always_inline unsigned long rdgsbase(void)
{
unsigned long gsbase;
asm volatile("rdgsbase %0" : "=r" (gsbase) :: "memory");
return gsbase;
}
static __always_inline void wrfsbase(unsigned long fsbase)
{
asm volatile("wrfsbase %0" :: "r" (fsbase) : "memory");
}
static __always_inline void wrgsbase(unsigned long gsbase)
{
asm volatile("wrgsbase %0" :: "r" (gsbase) : "memory");
}
#include <asm/cpufeature.h>
/* Helper functions for reading/writing FS/GS base */
static inline unsigned long x86_fsbase_read_cpu(void)
{
unsigned long fsbase;
if (static_cpu_has(X86_FEATURE_FSGSBASE))
fsbase = rdfsbase();
else
rdmsrl(MSR_FS_BASE, fsbase);
rdmsrl(MSR_FS_BASE, fsbase);
return fsbase;
}
static inline void x86_fsbase_write_cpu(unsigned long fsbase)
static inline unsigned long x86_gsbase_read_cpu_inactive(void)
{
if (static_cpu_has(X86_FEATURE_FSGSBASE))
wrfsbase(fsbase);
else
wrmsrl(MSR_FS_BASE, fsbase);
unsigned long gsbase;
rdmsrl(MSR_KERNEL_GS_BASE, gsbase);
return gsbase;
}
extern unsigned long x86_gsbase_read_cpu_inactive(void);
extern void x86_gsbase_write_cpu_inactive(unsigned long gsbase);
static inline void x86_fsbase_write_cpu(unsigned long fsbase)
{
wrmsrl(MSR_FS_BASE, fsbase);
}
static inline void x86_gsbase_write_cpu_inactive(unsigned long gsbase)
{
wrmsrl(MSR_KERNEL_GS_BASE, gsbase);
}
#endif /* CONFIG_X86_64 */


@@ -306,21 +306,6 @@
.endif
MODRM 0xc0 movq_r64_xmm_opd1 movq_r64_xmm_opd2
.endm
.macro RDPID opd
REG_TYPE rdpid_opd_type \opd
.if rdpid_opd_type == REG_TYPE_R64
R64_NUM rdpid_opd \opd
.else
R32_NUM rdpid_opd \opd
.endif
.byte 0xf3
.if rdpid_opd > 7
PFX_REX rdpid_opd 0
.endif
.byte 0x0f, 0xc7
MODRM 0xc0 rdpid_opd 0x7
.endm
#endif
#endif


@@ -5,7 +5,4 @@
/* MONITOR/MWAIT enabled in Ring 3 */
#define HWCAP2_RING3MWAIT (1 << 0)
/* Kernel allows FSGSBASE instructions available in Ring 3 */
#define HWCAP2_FSGSBASE BIT(1)
#endif


@@ -366,22 +366,6 @@ static __always_inline void setup_umip(struct cpuinfo_x86 *c)
cr4_clear_bits(X86_CR4_UMIP);
}
static __init int x86_nofsgsbase_setup(char *arg)
{
/* Require an exact match without trailing characters. */
if (strlen(arg))
return 0;
/* Do not emit a message if the feature is not present. */
if (!boot_cpu_has(X86_FEATURE_FSGSBASE))
return 1;
setup_clear_cpu_cap(X86_FEATURE_FSGSBASE);
pr_info("FSGSBASE disabled via kernel command line\n");
return 1;
}
__setup("nofsgsbase", x86_nofsgsbase_setup);
/*
* Protection Keys are not available in 32-bit mode.
*/
@@ -1386,12 +1370,6 @@ static void identify_cpu(struct cpuinfo_x86 *c)
setup_smap(c);
setup_umip(c);
/* Enable FSGSBASE instructions if available. */
if (cpu_has(c, X86_FEATURE_FSGSBASE)) {
cr4_set_bits(X86_CR4_FSGSBASE);
elf_hwcap2 |= HWCAP2_FSGSBASE;
}
/*
* The vendor-specific functions might have changed features.
* Now we do "generic changes."


@@ -161,40 +161,6 @@ enum which_selector {
GS
};
/*
* Out of line to be protected from kprobes. It is not used on Xen
* paravirt. When paravirt support is needed, it needs to be renamed
* with native_ prefix.
*/
static noinline unsigned long __rdgsbase_inactive(void)
{
unsigned long gsbase;
lockdep_assert_irqs_disabled();
native_swapgs();
gsbase = rdgsbase();
native_swapgs();
return gsbase;
}
NOKPROBE_SYMBOL(__rdgsbase_inactive);
/*
* Out of line to be protected from kprobes. It is not used on Xen
* paravirt. When paravirt support is needed, it needs to be renamed
* with native_ prefix.
*/
static noinline void __wrgsbase_inactive(unsigned long gsbase)
{
lockdep_assert_irqs_disabled();
native_swapgs();
wrgsbase(gsbase);
native_swapgs();
}
NOKPROBE_SYMBOL(__wrgsbase_inactive);
/*
* Saves the FS or GS base for an outgoing thread if FSGSBASE extensions are
* not available. The goal is to be reasonably fast on non-FSGSBASE systems.
@@ -244,22 +210,8 @@ static __always_inline void save_fsgs(struct task_struct *task)
{
savesegment(fs, task->thread.fsindex);
savesegment(gs, task->thread.gsindex);
if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
unsigned long flags;
/*
* If FSGSBASE is enabled, we can't make any useful guesses
* about the base, and user code expects us to save the current
* value. Fortunately, reading the base directly is efficient.
*/
task->thread.fsbase = rdfsbase();
local_irq_save(flags);
task->thread.gsbase = __rdgsbase_inactive();
local_irq_restore(flags);
} else {
save_base_legacy(task, task->thread.fsindex, FS);
save_base_legacy(task, task->thread.gsindex, GS);
}
save_base_legacy(task, task->thread.fsindex, FS);
save_base_legacy(task, task->thread.gsindex, GS);
}
#if IS_ENABLED(CONFIG_KVM)
@@ -338,22 +290,10 @@ static __always_inline void load_seg_legacy(unsigned short prev_index,
static __always_inline void x86_fsgsbase_load(struct thread_struct *prev,
struct thread_struct *next)
{
if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
/* Update the FS and GS selectors if they could have changed. */
if (unlikely(prev->fsindex || next->fsindex))
loadseg(FS, next->fsindex);
if (unlikely(prev->gsindex || next->gsindex))
loadseg(GS, next->gsindex);
/* Update the bases. */
wrfsbase(next->fsbase);
__wrgsbase_inactive(next->gsbase);
} else {
load_seg_legacy(prev->fsindex, prev->fsbase,
next->fsindex, next->fsbase, FS);
load_seg_legacy(prev->gsindex, prev->gsbase,
next->gsindex, next->gsbase, GS);
}
load_seg_legacy(prev->fsindex, prev->fsbase,
next->fsindex, next->fsbase, FS);
load_seg_legacy(prev->gsindex, prev->gsbase,
next->gsindex, next->gsbase, GS);
}
static unsigned long x86_fsgsbase_read_task(struct task_struct *task,
@@ -399,46 +339,13 @@ static unsigned long x86_fsgsbase_read_task(struct task_struct *task,
return base;
}
unsigned long x86_gsbase_read_cpu_inactive(void)
{
unsigned long gsbase;
if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
unsigned long flags;
/* Interrupts are disabled here. */
local_irq_save(flags);
gsbase = __rdgsbase_inactive();
local_irq_restore(flags);
} else {
rdmsrl(MSR_KERNEL_GS_BASE, gsbase);
}
return gsbase;
}
void x86_gsbase_write_cpu_inactive(unsigned long gsbase)
{
if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
unsigned long flags;
/* Interrupts are disabled here. */
local_irq_save(flags);
__wrgsbase_inactive(gsbase);
local_irq_restore(flags);
} else {
wrmsrl(MSR_KERNEL_GS_BASE, gsbase);
}
}
unsigned long x86_fsbase_read_task(struct task_struct *task)
{
unsigned long fsbase;
if (task == current)
fsbase = x86_fsbase_read_cpu();
else if (static_cpu_has(X86_FEATURE_FSGSBASE) ||
(task->thread.fsindex == 0))
else if (task->thread.fsindex == 0)
fsbase = task->thread.fsbase;
else
fsbase = x86_fsgsbase_read_task(task, task->thread.fsindex);
@@ -452,8 +359,7 @@ unsigned long x86_gsbase_read_task(struct task_struct *task)
if (task == current)
gsbase = x86_gsbase_read_cpu_inactive();
else if (static_cpu_has(X86_FEATURE_FSGSBASE) ||
(task->thread.gsindex == 0))
else if (task->thread.gsindex == 0)
gsbase = task->thread.gsbase;
else
gsbase = x86_fsgsbase_read_task(task, task->thread.gsindex);
@@ -493,11 +399,10 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
p->thread.sp = (unsigned long) fork_frame;
p->thread.io_bitmap_ptr = NULL;
save_fsgs(me);
p->thread.fsindex = me->thread.fsindex;
p->thread.fsbase = me->thread.fsbase;
p->thread.gsindex = me->thread.gsindex;
p->thread.gsbase = me->thread.gsbase;
savesegment(gs, p->thread.gsindex);
p->thread.gsbase = p->thread.gsindex ? 0 : me->thread.gsbase;
savesegment(fs, p->thread.fsindex);
p->thread.fsbase = p->thread.fsindex ? 0 : me->thread.fsbase;
savesegment(es, p->thread.es);
savesegment(ds, p->thread.ds);
memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));