5f7dc5d750
Unlike other alphas, marvel doesn't have real PC-style CMOS clock hardware - RTC accesses are emulated via PAL calls. Unfortunately, for an unknown reason these calls work only on CPU #0, so the current implementation makes CMOS_READ/WRITE from an arbitrary CPU execute on CPU #0 via an IPI. However, for obvious reasons this doesn't work with the standard get/set_rtc_time() functions, where a bunch of CMOS accesses is done with interrupts disabled.

Solved by issuing the IPI call for the entire get/set_rtc_time() functions, not for individual CMOS accesses, which is also a lot more efficient performance-wise.

The patch is largely based on code from Jay Estabrook. My changes:
- tweak asm-generic/rtc.h by adding a couple of #defines to avoid massive code duplication in arch/alpha/include/asm/rtc.h;
- sys_marvel.c: fix the get/set_rtc_time() return values (Jay's FIXMEs).

NOTE: this fixes *only* LIB_RTC drivers. The legacy (CONFIG_RTC) driver won't work on marvel. Actually I think we should just disable CONFIG_RTC on alpha (maybe in 2.6.30?), like most other arches - AFAIK, all modern distributions use LIB_RTC anyway.

Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
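To make the mechanism concrete, here is a minimal sketch of what the Marvel wrapper for the read path could look like in sys_marvel.c. It assumes the raw, interrupts-off accessor is exposed by asm-generic/rtc.h as __get_rtc_time() (one of the #defines mentioned above), that alpha's boot CPU id is available as boot_cpuid, and that the set path is symmetric; it illustrates the approach, not the exact code of the patch.

/* Sketch: run the whole RTC read on the boot CPU with a single IPI. */
#include <linux/smp.h>
#include <linux/rtc.h>
#include <asm/rtc.h>

struct marvel_rtc_time {
	struct rtc_time *time;
	int retval;
};

#ifdef CONFIG_SMP
static void smp_get_rtc_time(void *data)
{
	struct marvel_rtc_time *mrt = data;

	/* Runs on CPU #0, where the PAL-emulated CMOS access works. */
	mrt->retval = __get_rtc_time(mrt->time);
}
#endif

unsigned int marvel_get_rtc_time(struct rtc_time *time)
{
#ifdef CONFIG_SMP
	struct marvel_rtc_time mrt;

	if (smp_processor_id() != boot_cpuid) {
		/* One IPI for the whole function, not one per CMOS_READ. */
		mrt.time = time;
		smp_call_function_single(boot_cpuid, smp_get_rtc_time, &mrt, 1);
		return mrt.retval;
	}
#endif
	return __get_rtc_time(time);
}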
arch/alpha/include/asm/rtc.h - 17 lines, 371 B, C
#ifndef _ALPHA_RTC_H
#define _ALPHA_RTC_H

/*
 * Generic kernels dispatch RTC accesses through the machine vector;
 * Marvel SMP kernels use the IPI-based wrappers described above.
 */
#if defined(CONFIG_ALPHA_GENERIC)
# define get_rtc_time		alpha_mv.rtc_get_time
# define set_rtc_time		alpha_mv.rtc_set_time
#else
# if defined(CONFIG_ALPHA_MARVEL) && defined(CONFIG_SMP)
#  define get_rtc_time		marvel_get_rtc_time
#  define set_rtc_time		marvel_set_rtc_time
# endif
#endif

#include <asm-generic/rtc.h>

#endif
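This header works because the arch gets to define get_rtc_time/set_rtc_time before including asm-generic/rtc.h. The "couple of #defines" mentioned in the commit message presumably amount to something like the following in asm-generic/rtc.h (a sketch, assuming the raw accessors are named __get_rtc_time()/__set_rtc_time()):

/* Sketch: fall back to the raw accessors unless the arch overrode the names. */
#ifndef get_rtc_time
# define get_rtc_time	__get_rtc_time
#endif
#ifndef set_rtc_time
# define set_rtc_time	__set_rtc_time
#endif

With that in place, Marvel SMP kernels transparently pick up the IPI wrappers while every other configuration keeps the existing code path, with no duplication in arch/alpha/include/asm/rtc.h.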