r205751 - intrin.h: Implement __readmsr, __readcr3, and __writecr3
PaX Team
pageexec at freemail.hu
Tue Apr 8 03:59:58 PDT 2014
On 8 Apr 2014 at 0:28, Reid Kleckner wrote:
> +static __inline__ unsigned __int64 __attribute__((__always_inline__, __nodebug__))
> +__readmsr(unsigned long __register) {
> + // Loads the contents of a 64-bit model specific register (MSR) specified in
> + // the ECX register into registers EDX:EAX. The EDX register is loaded with
> + // the high-order 32 bits of the MSR and the EAX register is loaded with the
> + // low-order 32 bits. If less than 64 bits are implemented in the MSR being
> + // read, the values returned to EDX:EAX in unimplemented bit locations are
> + // undefined.
> + unsigned long __edx;
> + unsigned long __eax;
> + __asm__ ("rdmsr"
> + : "=d"(__edx), "=a"(__eax)
> + : "c"(__register)
> + : "%ecx", "%edx", "%eax");
> + return (((unsigned __int64)__edx) << 32) | (unsigned __int64)__eax;
> +}
i don't think this is correct: input/output registers should not appear in
the clobber list. gcc itself doesn't accept this code and complains with:
error: 'asm' operand has impossible constraints
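
fwiw, an untested sketch of what i'd expect instead (just drop the clobber
list, the constraints already tell the compiler about ecx/edx/eax):

  static __inline__ unsigned __int64 __attribute__((__always_inline__, __nodebug__))
  __readmsr(unsigned long __register) {
    unsigned long __edx;
    unsigned long __eax;
    __asm__ ("rdmsr"
             : "=d"(__edx), "=a"(__eax)   /* outputs, no extra clobbers needed */
             : "c"(__register));          /* input: ecx selects the MSR */
    return (((unsigned __int64)__edx) << 32) | (unsigned __int64)__eax;
  }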
> +static __inline__ unsigned long __attribute__((always_inline, __nodebug__))
> +__readcr3(void) {
> + unsigned long value;
> + __asm__ __volatile__("mov %%cr3, %0" : "=q"(value));
> + return value;
> +}
note that asm volatile won't prevent reordering per se; the solution linux
uses is to give these inline asm statements a fake dependency (read or
write) on a variable used just for this purpose. the alternative would be
to use a "memory" clobber as you did for __writecr3, but that's quite
heavyweight as it prevents reordering of all other loads/stores as well,
which often isn't necessary (e.g., on a context switch the kernel doesn't
care how loads/stores get reordered around a cr3 reload since only the
userland part of the address space changes).
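
for illustration, an untested sketch of the linux-style fake dependency
(the dummy variable is only there to show the idea, linux calls its
equivalent __force_order, exact constraints may need tweaking):

  /* dummy variable referenced only to order cr3 accesses against each other */
  extern unsigned long __force_order;

  static __inline__ unsigned long __attribute__((__always_inline__, __nodebug__))
  __readcr3(void) {
    unsigned long __cr3_val;
    /* the fake "=m" output orders this read against other cr3 accesses
       that reference the same dummy, without a full "memory" clobber */
    __asm__ __volatile__("mov %%cr3, %0" : "=r"(__cr3_val), "=m"(__force_order));
    return __cr3_val;
  }

  static __inline__ void __attribute__((__always_inline__, __nodebug__))
  __writecr3(unsigned long __cr3_val) {
    /* fake "m" input dependency on the same dummy instead of clobbering memory */
    __asm__ __volatile__("mov %0, %%cr3" : : "r"(__cr3_val), "m"(__force_order));
  }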
cheers,
PaX Team