Commit ad6750c1 authored by Linus Walleij, committed by Greg Kroah-Hartman

ARM: 9429/1: ioremap: Sync PGDs for VMALLOC shadow


commit d6e6a74d4cea853b5321eeabb69c611148eedefe upstream.

When syncing the VMALLOC area to other CPUs, make sure to also
sync the KASAN shadow memory for the VMALLOC area, so that we
don't get stale entries for the shadow memory in the top-level PGD.
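
For context: ARM tracks new kernel VMALLOC mappings with a sequence
counter on init_mm, and each mm copies the kernel's top-level entries
lazily when it notices the counter has moved. A rough sketch of the
caller side, paraphrased from arch/arm/include/asm/mmu_context.h
(details vary by kernel version):

	/* If this mm's view of the kernel PGD entries is behind
	 * init_mm's, copy over whatever has been added since.
	 */
	static inline void check_vmalloc_seq(struct mm_struct *mm)
	{
		if (!IS_ENABLED(CONFIG_ARM_LPAE) &&
		    unlikely(atomic_read(&mm->context.vmalloc_seq) !=
			     atomic_read(&init_mm.context.vmalloc_seq)))
			__check_vmalloc_seq(mm);
	}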

Since we are now copying PGDs in two places, create a helper
function named memcpy_pgd() to do the actual copying, and a
second helper to map the addresses of VMALLOC_START and
VMALLOC_END to the corresponding shadow memory addresses.
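
The mem-to-shadow translation used by that second helper is the
generic KASAN one; as a minimal sketch, what kasan_mem_to_shadow()
does amounts to a shift plus an offset (the scale shift and offset
are configuration-dependent constants):

	/* One shadow byte tracks 1 << KASAN_SHADOW_SCALE_SHIFT
	 * (typically 8) bytes of real memory.
	 */
	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		       + KASAN_SHADOW_OFFSET;
	}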

Co-developed-by: Melon Liu <melon1335@163.com>

Cc: stable@vger.kernel.org
Fixes: 565cbaad ("ARM: 9202/1: kasan: support CONFIG_KASAN_VMALLOC")
Link: https://lore.kernel.org/linux-arm-kernel/a1a1d062-f3a2-4d05-9836-3b098de9db6d@foss.st.com/


Reported-by: Clement LE GOFFIC <clement.legoffic@foss.st.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -23,6 +23,7 @@
  */
 #include <linux/module.h>
 #include <linux/errno.h>
+#include <linux/kasan.h>
 #include <linux/mm.h>
 #include <linux/vmalloc.h>
 #include <linux/io.h>
@@ -115,16 +116,40 @@ int ioremap_page(unsigned long virt, unsigned long phys,
 }
 EXPORT_SYMBOL(ioremap_page);
 
+#ifdef CONFIG_KASAN
+static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
+{
+	return (unsigned long)kasan_mem_to_shadow((void *)addr);
+}
+#else
+static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
+{
+	return 0;
+}
+#endif
+
+static void memcpy_pgd(struct mm_struct *mm, unsigned long start,
+		       unsigned long end)
+{
+	end = ALIGN(end, PGDIR_SIZE);
+	memcpy(pgd_offset(mm, start), pgd_offset_k(start),
+	       sizeof(pgd_t) * (pgd_index(end) - pgd_index(start)));
+}
+
 void __check_vmalloc_seq(struct mm_struct *mm)
 {
 	int seq;
 
 	do {
 		seq = atomic_read(&init_mm.context.vmalloc_seq);
-		memcpy(pgd_offset(mm, VMALLOC_START),
-		       pgd_offset_k(VMALLOC_START),
-		       sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
-					pgd_index(VMALLOC_START)));
+		memcpy_pgd(mm, VMALLOC_START, VMALLOC_END);
+		if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+			unsigned long start =
+				arm_kasan_mem_to_shadow(VMALLOC_START);
+			unsigned long end =
+				arm_kasan_mem_to_shadow(VMALLOC_END);
+			memcpy_pgd(mm, start, end);
+		}
 		/*
 		 * Use a store-release so that other CPUs that observe the
 		 * counter's new value are guaranteed to see the results of the
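
One subtlety in memcpy_pgd() above: the end address is rounded up to
PGDIR_SIZE because shadow addresses derived from VMALLOC_END are
generally not PGDIR-aligned (the shadow region is the VMALLOC range
scaled down by the shadow factor), so the count
pgd_index(end) - pgd_index(start) would otherwise stop one entry short
of the PGD slot covering the tail of the range. A toy user-space
illustration, with hypothetical addresses and a 2 MiB PGDIR_SIZE:

	#include <stdio.h>

	#define PGDIR_SHIFT	21			/* hypothetical: 2 MiB per PGD entry */
	#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))
	#define PGD_INDEX(x)	((x) >> PGDIR_SHIFT)

	int main(void)
	{
		unsigned long start = 0xc0000000UL;	/* hypothetical shadow start */
		unsigned long end   = 0xc1234567UL;	/* hypothetical, unaligned shadow end */

		printf("entries without ALIGN: %lu\n",
		       PGD_INDEX(end) - PGD_INDEX(start));	/* 9: tail not covered */
		printf("entries with ALIGN:    %lu\n",
		       PGD_INDEX(ALIGN(end, PGDIR_SIZE)) - PGD_INDEX(start));	/* 10 */
		return 0;
	}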