x86/retpoline: Optimize inline assembler for vmexit_fill_RSB

The generated assembler for the C fill RSB inline asm operations has
several issues:

- The C code sets up the loop register, which is then immediately
  overwritten in __FILL_RETURN_BUFFER with the same value again.

- The C code also passes in the iteration count in another register, which
  is not used at all.

Remove these two unnecessary operations. Just rely on the single constant
passed to the macro for the iterations.
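
For illustration only (not part of the patch): a minimal user-space sketch of the resulting constraint pattern. CLEAR_LOOPS, fill_loop_sketch and the local __stringify copy are made-up stand-ins, and the loop body is a plain dec/jnz rather than the real RSB-stuffing sequence; it only shows that an output-only "=r" scratch plus a stringified constant needs neither C-side initialization nor an input operand.

  /* Stand-alone sketch, assuming x86-64 and GCC/Clang extended asm. */
  #define __stringify_1(x)	#x
  #define __stringify(x)	__stringify_1(x)

  #define CLEAR_LOOPS	32	/* stand-in for RSB_CLEAR_LOOPS */

  static inline void fill_loop_sketch(void)
  {
  	unsigned long loops;	/* scratch, written only inside the asm */

  	asm volatile ("mov $" __stringify(CLEAR_LOOPS) ", %0\n\t"
  		      "1:\n\t"
  		      "dec %0\n\t"
  		      "jnz 1b"
  		      : "=r" (loops)	/* plain "=r": no inputs to conflict with */
  		      :			/* no inputs: the count is a literal */
  		      : "memory");
  }

  int main(void)
  {
  	fill_loop_sketch();
  	return 0;
  }

Built with gcc -O2 on x86-64, the compiler should allocate a single scratch register for %0 and emit the constant directly in the mov, with no setup code before the asm block.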

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Cc: dave.hansen@intel.com
Cc: gregkh@linuxfoundation.org
Cc: torvalds@linux-foundation.org
Cc: arjan@linux.intel.com
Link: https://lkml.kernel.org/r/20180117225328.15414-1-andi@firstfloor.org

@@ -206,16 +206,17 @@ extern char __indirect_thunk_end[];
 static inline void vmexit_fill_RSB(void)
 {
 #ifdef CONFIG_RETPOLINE
-	unsigned long loops = RSB_CLEAR_LOOPS / 2;
+	unsigned long loops;
 
 	asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
 		      ALTERNATIVE("jmp 910f",
 				  __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
 				  X86_FEATURE_RETPOLINE)
 		      "910:"
-		      : "=&r" (loops), ASM_CALL_CONSTRAINT
-		      : "r" (loops) : "memory" );
+		      : "=r" (loops), ASM_CALL_CONSTRAINT
+		      : : "memory" );
 #endif
 }
 #endif /* __ASSEMBLY__ */
 #endif /* __NOSPEC_BRANCH_H__ */
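
The "=&r" to "=r" change follows from dropping the input operand rather than being a separate tweak: the early-clobber modifier only keeps the compiler from assigning an input operand to a register the asm writes early, and with an empty input list there is nothing left to conflict with. The iteration count never becomes an operand at all, since __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)) pastes RSB_CLEAR_LOOPS into the asm template as a literal, which is why the old "r" (loops) input register was dead weight.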