- Aug 30, 2023
-
-
Heiko Carstens authored
Slightly improve the description which explains why the first prefix page must be mapped executable when the BEAR-enhancement facility is not installed. Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
For consistency reasons change the type of __samode31, __eamode31, __stext_amode31, and __etext_amode31 to a char pointer so they (nearly) match the type of all other sections. This allows for code simplifications with follow-on patches. Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
The kernel mapping is set up in two stages: in the decompressor map all pages with RWX permissions, and within the kernel change all mappings to their final permissions, where most of the mappings are changed from RWX to RWNX. Change this and map all pages RWNX from the beginning, however without enabling noexec via control register modification. This means that effectively all pages are used with RWX permissions like before. When the final permissions have been applied to the kernel mapping, enable noexec via control register modification. This allows removing quite a bit of non-obvious code. Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
Alexander Gordeev authored
Fix virtual vs physical address confusion (virtual and physical addresses are currently the same). Reviewed-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
- Aug 21, 2023
-
-
Suren Baghdasaryan authored
walk_page_range() and friends often operate under write-locked mmap_lock. With the introduction of vma locks, the vmas have to be locked as well during such walks to prevent concurrent page faults in these areas. Add an additional member to mm_walk_ops to indicate locking requirements for the walk. The change ensures that page walks which prevent concurrent page faults by write-locking mmap_lock operate correctly after the introduction of per-vma locks. With per-vma locks page faults can be handled under the vma lock without taking mmap_lock at all, so write-locking mmap_lock would not stop them. The change ensures vmas are properly locked during such walks. A sample issue this solves is do_mbind() performing queue_pages_range() to queue pages for migration. Without this change a page can be concurrently faulted into the area and be left out of the migration. Link: https://lkml.kernel.org/r/20230804152724.3090321-2-surenb@google.com Signed-off-by:
Suren Baghdasaryan <surenb@google.com> Suggested-by:
Linus Torvalds <torvalds@linuxfoundation.org> Suggested-by:
Jann Horn <jannh@google.com> Cc: David Hildenbrand <david@redhat.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Hugh Dickins <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Laurent Dufour <ldufour@linux.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Michel Lespinasse <michel@lespinasse.org> Cc: Peter Xu <peterx@redhat.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
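A minimal sketch of how a walker might declare the new locking requirement; the PGWALK_WRLOCK value and the callback names follow the patch described above but are reproduced from memory and should be treated as assumptions:

    /* walker that runs under write-locked mmap_lock and now also write-locks
     * each vma it visits; callback names are illustrative */
    static const struct mm_walk_ops queue_pages_walk_ops = {
            .test_walk = queue_pages_test_walk,
            .pmd_entry = queue_folios_pte_range,
            .walk_lock = PGWALK_WRLOCK,
    };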
-
- Aug 16, 2023
-
-
Linus Walleij authored
Making virt_to_pfn() a static inline taking a strongly typed (const void *) makes the contract of passing a pointer of that type to the function explicit, and exposes any misuse of the macro virt_to_pfn() acting polymorphic and accepting many types such as (void *), (uintptr_t) or (unsigned long) as arguments without warnings. For symmetry do the same with pfn_to_virt(), reflecting the current layout in asm-generic/page.h. Doing this reveals a number of offenders in the arch code and the S390-specific drivers, so just bite the bullet and fix up all of those as well. Signed-off-by:
Linus Walleij <linus.walleij@linaro.org> Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Link: https://lore.kernel.org/r/20230812-virt-to-phys-s390-v2-1-6c40f31fe36f@linaro.org Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
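A sketch of the conversion, following the asm-generic/page.h layout mentioned above; the exact s390 definitions may differ:

    /* old, type-polymorphic form:
     *   #define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT)
     */
    static inline unsigned long virt_to_pfn(const void *kaddr)
    {
            return __pa(kaddr) >> PAGE_SHIFT;       /* only pointers accepted now */
    }

    static inline void *pfn_to_virt(unsigned long pfn)
    {
            return __va(pfn << PAGE_SHIFT);
    }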
-
Alexander Gordeev authored
Make the Real Memory Copy area size and mask explicit. This does not bring any functional change and is only needed for clarity. Acked-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
- Jul 29, 2023
-
-
Heiko Carstens authored
Use consistent comment style within the whole pfault C code. Reviewed-by:
Sven Schnelle <svens@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Clean up the pfault inline assemblies: use symbolic names for operands, and add extra line breaks and whitespace to improve readability. In addition, change __pfault_init() to return -EOPNOTSUPP in case of an exception, and don't return a made up valid diag 258 return value (aka "8"). This allows simplifying the inline assembly, and makes debugging easier in case something is broken. Reviewed-by:
Sven Schnelle <svens@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
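A generic illustration of symbolic operand names in s390 inline assembly (not the actual diag 258 code):

    static inline unsigned long add_example(unsigned long a, unsigned long b)
    {
            unsigned long sum;

            /* named operands instead of positional %0/%1 references */
            asm volatile(
                    "       lgr     %[sum],%[a]\n"
                    "       agr     %[sum],%[b]\n"
                    : [sum] "=&d" (sum)
                    : [a] "d" (a), [b] "d" (b)
                    : "cc");
            return sum;
    }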
-
Heiko Carstens authored
early_param() is the standard way of defining early kernel command line parameters. Use that instead of the old __setup() variant. Reviewed-by:
Sven Schnelle <svens@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
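A sketch of the conversion; the "nopfault" parameter name and the pfault_disable variable are reproduced from memory and should be treated as assumptions:

    static int pfault_disable;

    /* old: __setup("nopfault", nopfault); handler returned 1, parsed later */
    static int __init nopfault(char *str)
    {
            pfault_disable = 1;
            return 0;       /* early_param() handlers return 0 on success */
    }
    early_param("nopfault", nopfault);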
-
Heiko Carstens authored
struct pfault_refbk is naturally packed and aligned; remove not needed packed and aligned attributes. Reviewed-by:
Sven Schnelle <svens@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
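A hypothetical illustration of the point (not the real pfault_refbk layout): when every member is naturally aligned, __packed and __aligned() do not change the layout and can be dropped:

    struct refbk_example {
            u16 diag_code;
            u16 function_code;
            u16 dword_len;
            u16 version;
            u64 guest_addr;         /* offset 8, naturally aligned */
    };                              /* 16 bytes with or without __packed/__aligned(8) */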
-
Heiko Carstens authored
Remove another leftover of the 31 bit area: replace the not needed "unsigned long long" suffix with "unsigned long", and stay consistent with the rest of the code. Reviewed-by:
Sven Schnelle <svens@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
The pfault code has nothing to do with regular fault handling. Therefore move it to its own C file. Also add a dedicated pfault header file. This way changes to setup.h don't cause a recompile of the pfault code and vice versa. Reviewed-by:
Sven Schnelle <svens@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
- Jul 27, 2023
-
-
Sven Schnelle authored
Since commit bb1520d5 ("s390/mm: start kernel with DAT enabled") the kernel crashes early during boot when debug pagealloc is enabled:
mem auto-init: stack:off, heap alloc:off, heap free:off
addressing exception: 0005 ilc:2 [#1] SMP DEBUG_PAGEALLOC
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 6.5.0-rc3-09759-gc5666c912155 #630
[..]
Krnl Code: 00000000001325f6: ec5600248064 cgrj %r5,%r6,8,000000000013263e
           00000000001325fc: eb880002000c srlg %r8,%r8,2
          #0000000000132602: b2210051     ipte %r5,%r1,%r0,0
          >0000000000132606: b90400d1     lgr %r13,%r1
           000000000013260a: 41605008     la %r6,8(%r5)
           000000000013260e: a7db1000     aghi %r13,4096
           0000000000132612: b221006d     ipte %r6,%r13,%r0,0
           0000000000132616: e3d0d0000171 lay %r13,4096(%r13)
Call Trace:
 __kernel_map_pages+0x14e/0x320
 __free_pages_ok+0x23a/0x5a8
 free_low_memory_core_early+0x214/0x2c8
 memblock_free_all+0x28/0x58
 mem_init+0xb6/0x228
 mm_core_init+0xb6/0x3b0
 start_kernel+0x1d2/0x5a8
 startup_continue+0x36/0x40
Kernel panic - not syncing: Fatal exception: panic_on_oops
This is caused by using large mappings on machines with EDAT1/EDAT2. Add code to split the mappings into 4k pages if debug pagealloc is enabled by CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT or the debug_pagealloc kernel command line option. Fixes: bb1520d5 ("s390/mm: start kernel with DAT enabled") Signed-off-by:
Sven Schnelle <svens@linux.ibm.com> Reviewed-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
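A hedged sketch of the underlying rule: __kernel_map_pages() needs 4k granularity, so large EDAT mappings must be avoided (or split) when debug_pagealloc is active. The helper below is illustrative only:

    static bool may_use_large_mapping(void)
    {
            if (debug_pagealloc_enabled())
                    return false;   /* keep 4k pages so they can be unmapped individually */
            return MACHINE_HAS_EDAT1;
    }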
-
- Jul 24, 2023
-
-
Alexander Gordeev authored
The segment_warning() interface reports the maximum mappable physical address for the -ERANGE error. Currently that address is the value of the VMEM_MAX_PHYS macro, but that might well change. A better way to obtain that address is calling the arch_get_mappable_range() callback - the one that is used by vmem_add_mapping() and generates the -ERANGE error in the first place. Reviewed-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
Alexander Gordeev authored
As per the description in mm/memory_hotplug.c, platforms should define arch_get_mappable_range() which provides the maximum possible addressable physical memory range for which the linear mapping could be created. The current implementation uses the VMEM_MAX_PHYS macro as the maximum mappable physical address, and it is simply a cast to vmemmap. Since the address is in physical address space the natural upper limit of MAX_PHYSMEM_BITS is honoured: vmemmap_start = min(vmemmap_start, 1UL << MAX_PHYSMEM_BITS); Further, to make sure the identity mapping would not overlap with vmemmap, the size of the identity mapping could be stripped like this: ident_map_size = min(ident_map_size, vmemmap_start); Similarly, any other memory that could be added (e.g. a DCSS segment) should not overlap with vmemmap either, and that is prevented by using vmemmap (the VMEM_MAX_PHYS macro) as the upper limit. However, while the use of VMEM_MAX_PHYS brings the desired result it actually poses two issues: 1. As described, vmemmap is handled as a physical address, although it is actually a pointer to struct page in virtual address space. 2. As vmemmap is a virtual address it could have been located anywhere in the virtual address space. However, the desired necessity to honour the MAX_PHYSMEM_BITS limit prevents that. Rework the arch_get_mappable_range() callback in a way that it does not use the VMEM_MAX_PHYS macro and does not confuse the notion of virtual vs physical address spaces as a result. That paves the way for moving vmemmap elsewhere and optimizing the virtual address space layout. Introduce the max_mappable preserved boot variable and let the setup_kernel_memory_layout() function set it up. As a result, the rest of the code does not need to know the virtual memory layout specifics. Reviewed-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
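A sketch of the reworked callback based on the description above; the details around max_mappable are assumptions:

    struct range arch_get_mappable_range(void)
    {
            struct range mhp_range;

            mhp_range.start = 0;
            mhp_range.end = max_mappable - 1;   /* set up in setup_kernel_memory_layout() */
            return mhp_range;
    }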
-
- Jul 18, 2023
-
-
Sven Schnelle authored
With per-vma locks, handle_mm_fault() may return non-fatal error flags. In this case the code should reset the fault flags before returning. Fixes: e06f47a1 ("s390/mm: try VMA lock-based page fault handling first") Signed-off-by:
Sven Schnelle <svens@linux.ibm.com> Reviewed-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
Claudio Imbrenda authored
The index field of the struct page corresponding to a guest ASCE should be 0. When replacing the ASCE in s390_replace_asce(), the index of the new ASCE should also be set to 0. Having the wrong index might lead to the wrong addresses being passed around when notifying pte invalidations, and eventually to validity intercepts (VM crash) if the prefix gets unmapped and the notifier gets called with the wrong address. Reviewed-by:
Philippe Mathieu-Daudé <philmd@linaro.org> Fixes: faa2f72c ("KVM: s390: pv: leak the topmost page table when destroy fails") Reviewed-by:
Janosch Frank <frankja@linux.ibm.com> Signed-off-by:
Claudio Imbrenda <imbrenda@linux.ibm.com> Message-ID: <20230705111937.33472-3-imbrenda@linux.ibm.com>
-
- Jul 04, 2023
-
-
Alexander Gordeev authored
This reverts commit 456be42a. The assumption that VMEM_MAX_PHYS should match ident_map_size is wrong. At least discontiguous saved segments (DCSS) could be loaded at addresses beyond ident_map_size, and the dcssblk device driver might fail as a result. Reported-by:
Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com>
-
- Jul 03, 2023
-
-
Heiko Carstens authored
Fix various typos found with codespell. Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com>
-
Heiko Carstens authored
Include linux/io.h instead of asm/io.h everywhere. linux/io.h includes asm/io.h, so this shouldn't cause any problems. Instead, this might help with some randconfig build errors which were reported due to some undefined io related functions. Also move the changed include so it stays grouped together with other includes from the same directory. For ctcm_mpc.c also remove unneeded comments (actually questions). Acked-by:
Christian Borntraeger <borntraeger@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com>
-
- Jun 28, 2023
-
-
Alexander Gordeev authored
Fix virtual vs physical address confusion (virtual and physical addresses are currently the same). Reviewed-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com>
-
Alexander Gordeev authored
VMEM_MAX_PHYS is supposed to be the highest physical address that can be added to the identity mapping. It should match ident_map_size, which has the same meaning. However, unlike ident_map_size it is not adjusted against various limiting factors (see the comment to setup_ident_map_size() function). That renders all checks against VMEM_MAX_PHYS invalid. Further, VMEM_MAX_PHYS is currently set to vmemmap, which is an address in virtual memory space. However, it gets compared against physical addresses in various locations. That works, because both address spaces are the same on s390, but otherwise it is wrong. Instead of fixing VMEM_MAX_PHYS misuse and semantics just remove it. Acked-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com>
-
- Jun 27, 2023
-
-
Linus Torvalds authored
This finishes the job of always holding the mmap write lock when extending the user stack vma, and removes the 'write_locked' argument from the vm helper functions again. For some cases, we just avoid expanding the stack at all: drivers and page pinning really shouldn't be extending any stacks. Let's see if any strange users really wanted that. It's worth noting that architectures that weren't converted to the new lock_mm_and_find_vma() helper function are left using the legacy "expand_stack()" function, but it has been changed to drop the mmap_lock and take it for writing while expanding the vma. This makes it fairly straightforward to convert the remaining architectures. As a result of dropping and re-taking the lock, the calling conventions for this function have also changed, since the old vma may no longer be valid. So it will now return the new vma if successful, and NULL - and the lock dropped - if the area could not be extended. Tested-by:
Vegard Nossum <vegard.nossum@oracle.com> Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> # ia64 Tested-by: Frank Scheiner <frank.scheiner@web.de> # ia64 Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
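A hedged sketch of the new calling convention for callers still on the legacy expand_stack() path; the wrapper function is illustrative:

    static struct vm_area_struct *find_or_extend_vma(struct mm_struct *mm,
                                                      unsigned long addr)
    {
            struct vm_area_struct *vma;

            mmap_read_lock(mm);
            vma = find_vma(mm, addr);
            if (vma && vma->vm_start <= addr)
                    return vma;             /* mmap_lock still held for read */
            /*
             * expand_stack() drops mmap_lock and re-takes it for writing while
             * growing the stack vma, so the vma found above must not be reused.
             * On failure it returns NULL with the lock already dropped.
             */
            return expand_stack(mm, addr);
    }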
-
- Jun 20, 2023
-
-
Alexander Gordeev authored
Since commit 3b5c3f000c2e ("s390/kasan: move shadow mapping to decompressor") the decompressor establishes mappings for the shadow memory and sets initial protection attributes to RWX. The decompressed kernel resets protection to RW+NX later on. In case a shadow memory range is not aligned on a page boundary (e.g. as a result of using the mem= kernel command line parameter), the "Checked W+X mappings: FAILED, 1 W+X pages found" warning hits. Reported-by:
Vasily Gorbik <gor@linux.ibm.com> Fixes: 557b1970 ("s390/kasan: move shadow mapping to decompressor") Reviewed-by:
Vasily Gorbik <gor@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com>
-
- Jun 19, 2023
-
-
Hugh Dickins authored
pte_alloc_map_lock() expects to be followed by pte_unmap_unlock(): to keep balance in future, pass ptep as well as ptl to gmap_pte_op_end(), and use pte_unmap_unlock() instead of direct spin_unlock() (even though ptep ends up unused inside the macro). Link: https://lkml.kernel.org/r/78873af-e1ec-4f9-47ac-483940ac6daa@google.com Signed-off-by:
Hugh Dickins <hughd@google.com> Acked-by:
Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Chris Zankel <chris@zankel.net> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com> Cc: David Hildenbrand <david@redhat.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: John David Anglin <dave.anglin@bell.net> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
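The change described above essentially amounts to this (sketch):

    static void gmap_pte_op_end(pte_t *ptep, spinlock_t *ptl)
    {
            pte_unmap_unlock(ptep, ptl);    /* was: spin_unlock(ptl) */
    }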
-
Hugh Dickins authored
In rare transient cases, not yet made possible, pte_offset_map() and pte_offset_map_lock() may not find a page table: handle appropriately. Add comment on mm's contract with s390 above __zap_zero_pages(), and fix old comment there: must be called after THP was disabled. Link: https://lkml.kernel.org/r/3ff29363-336a-9733-12a1-5c31a45c8aeb@google.com Signed-off-by:
Hugh Dickins <hughd@google.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Chris Zankel <chris@zankel.net> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com> Cc: David Hildenbrand <david@redhat.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: John David Anglin <dave.anglin@bell.net> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- May 17, 2023
-
-
Arnd Bergmann authored
The arch_report_meminfo() function is provided by four architectures, with a __weak fallback in procfs itself. On architectures that don't have a custom version, the __weak version causes a warning because of the missing prototype. Remove the architecture specific prototypes and instead add one in linux/proc_fs.h. Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> # for arch/x86 Acked-by: Helge Deller <deller@gmx.de> # parisc Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Message-Id: <20230516195834.551901-1-arnd@kernel.org> Signed-off-by:
Christian Brauner <brauner@kernel.org>
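The shared declaration, roughly as it would appear in include/linux/proc_fs.h (sketch):

    struct seq_file;

    /* implemented by a few architectures; a __weak no-op fallback lives in procfs */
    void arch_report_meminfo(struct seq_file *m);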
-
- May 04, 2023
-
-
Claudio Imbrenda authored
On machines without the Destroy Secure Configuration Fast UVC, the topmost level of page tables is set aside and freed asynchronously as last step of the asynchronous teardown. Each gmap has a host_to_guest radix tree mapping host (userspace) addresses (with 1M granularity) to gmap segment table entries (pmds). If a guest is smaller than 2GB, the topmost level of page tables is the segment table (i.e. there are only 2 levels). Replacing it means that the pointers in the host_to_guest mapping would become stale and cause all kinds of nasty issues. This patch fixes the issue by disallowing asynchronous teardown for guests with only 2 levels of page tables. Userspace should (and already does) try using the normal destroy if the asynchronous one fails. Update s390_replace_asce so it refuses to replace segment type ASCEs. This is still needed in case the normal destroy VM fails. Fixes: fb491d55 ("KVM: s390: pv: asynchronous destroy for reboot") Reviewed-by:
Marc Hartmayer <mhartmay@linux.ibm.com> Reviewed-by:
Janosch Frank <frankja@linux.ibm.com> Signed-off-by:
Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20230421085036.52511-2-imbrenda@linux.ibm.com>
-
- May 03, 2023
-
-
David Hildenbrand authored
Let's factor out actual disabling of KSM. The existing "mm->def_flags &= ~VM_MERGEABLE;" was essentially a NOP and can be dropped, because def_flags should never include VM_MERGEABLE. Note that we don't currently prevent re-enabling KSM. This should now be faster in case KSM was never enabled, because we only conditionally iterate all VMAs. Further, it certainly looks cleaner. Link: https://lkml.kernel.org/r/20230422210156.33630-1-david@redhat.com Signed-off-by:
David Hildenbrand <david@redhat.com> Acked-by:
Janosch Frank <frankja@linux.ibm.com> Acked-by:
Stefan Roesch <shr@devkernel.io> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Rik van Riel <riel@surriel.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
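A hedged sketch of the resulting call site; ksm_disable() is the factored-out helper, and the caller shown is illustrative:

    int gmap_mark_unmergeable(void)
    {
            /* no-op if KSM was never enabled for this mm; iterates vmas otherwise */
            return ksm_disable(current->mm);
    }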
-
- Apr 21, 2023
-
-
Linus Torvalds authored
Instead of having callers care about the mmap_min_addr logic for the lowest valid mapping address (and some of them getting it wrong), just move the logic into vm_unmapped_area() itself. One less thing for various architecture cases (and generic helpers) to worry about. We should really try to make much more of this be common code, but baby steps.. Without this, vm_unmapped_area() could return an address below mmap_min_addr (because some caller forgot about that). That then causes the mmap machinery to think it has found a workable address, but then later security_mmap_addr(addr) is unhappy about it and the mmap() returns with a nonsensical error (EPERM). The proper action is to either return ENOMEM (if the virtual address space is exhausted), or try to find another address (ie do a bottom-up search for free addresses after the top-down one failed). See commit 2afc745f ("mm: ensure get_unmapped_area() returns higher address than mmap_min_addr"), which fixed this for one call site (the generic arch_get_unmapped_area_topdown() fallback) but left other cases alone. Link: https://lkml.kernel.org/r/20230418214009.1142926-1-Liam.Howlett@oracle.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Liam Howlett <liam.howlett@oracle.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
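A hedged sketch of the idea (not the literal implementation): clamp the lower search bound once, centrally, so callers no longer have to remember mmap_min_addr:

    unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info)
    {
            if (info->low_limit < mmap_min_addr)
                    info->low_limit = mmap_min_addr;

            if (info->flags & VM_UNMAPPED_AREA_TOPDOWN)
                    return unmapped_area_topdown(info);
            return unmapped_area(info);
    }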
-
Stefan Roesch authored
Patch series "mm: process/cgroup ksm support", v9. So far KSM can only be enabled by calling madvise for memory regions. To be able to use KSM for more workloads, KSM needs to have the ability to be enabled / disabled at the process / cgroup level. Use case 1: The madvise call is not available in the programming language. An example for this are programs with forked workloads using a garbage collected language without pointers. In such a language madvise cannot be made available. In addition the addresses of objects get moved around as they are garbage collected. KSM sharing needs to be enabled "from the outside" for these type of workloads. Use case 2: The same interpreter can also be used for workloads where KSM brings no benefit or even has overhead. We'd like to be able to enable KSM on a workload by workload basis. Use case 3: With the madvise call sharing opportunities are only enabled for the current process: it is a workload-local decision. A considerable number of sharing opportunities may exist across multiple workloads or jobs (if they are part of the same security domain). Only a higler level entity like a job scheduler or container can know for certain if its running one or more instances of a job. That job scheduler however doesn't have the necessary internal workload knowledge to make targeted madvise calls. Security concerns: In previous discussions security concerns have been brought up. The problem is that an individual workload does not have the knowledge about what else is running on a machine. Therefore it has to be very conservative in what memory areas can be shared or not. However, if the system is dedicated to running multiple jobs within the same security domain, its the job scheduler that has the knowledge that sharing can be safely enabled and is even desirable. Performance: Experiments with using UKSM have shown a capacity increase of around 20%. Here are the metrics from an instagram workload (taken from a machine with 64GB main memory): full_scans: 445 general_profit: 20158298048 max_page_sharing: 256 merge_across_nodes: 1 pages_shared: 129547 pages_sharing: 5119146 pages_to_scan: 4000 pages_unshared: 1760924 pages_volatile: 10761341 run: 1 sleep_millisecs: 20 stable_node_chains: 167 stable_node_chains_prune_millisecs: 2000 stable_node_dups: 2751 use_zero_pages: 0 zero_pages_sharing: 0 After the service is running for 30 minutes to an hour, 4 to 5 million shared pages are common for this workload when using KSM. Detailed changes: 1. New options for prctl system command This patch series adds two new options to the prctl system call. The first one allows to enable KSM at the process level and the second one to query the setting. The setting will be inherited by child processes. With the above setting, KSM can be enabled for the seed process of a cgroup and all processes in the cgroup will inherit the setting. 2. Changes to KSM processing When KSM is enabled at the process level, the KSM code will iterate over all the VMA's and enable KSM for the eligible VMA's. When forking a process that has KSM enabled, the setting will be inherited by the new child process. 3. Add general_profit metric The general_profit metric of KSM is specified in the documentation, but not calculated. This adds the general profit metric to /sys/kernel/debug/mm/ksm. 4. Add more metrics to ksm_stat This adds the process profit metric to /proc/<pid>/ksm_stat. 5. Add more tests to ksm_tests and ksm_functional_tests This adds an option to specify the merge type to the ksm_tests. 
This allows to test madvise and prctl KSM. It also adds a two new tests to ksm_functional_tests: one to test the new prctl options and the other one is a fork test to verify that the KSM process setting is inherited by client processes. This patch (of 3): So far KSM can only be enabled by calling madvise for memory regions. To be able to use KSM for more workloads, KSM needs to have the ability to be enabled / disabled at the process / cgroup level. 1. New options for prctl system command This patch series adds two new options to the prctl system call. The first one allows to enable KSM at the process level and the second one to query the setting. The setting will be inherited by child processes. With the above setting, KSM can be enabled for the seed process of a cgroup and all processes in the cgroup will inherit the setting. 2. Changes to KSM processing When KSM is enabled at the process level, the KSM code will iterate over all the VMA's and enable KSM for the eligible VMA's. When forking a process that has KSM enabled, the setting will be inherited by the new child process. 1) Introduce new MMF_VM_MERGE_ANY flag This introduces the new flag MMF_VM_MERGE_ANY flag. When this flag is set, kernel samepage merging (ksm) gets enabled for all vma's of a process. 2) Setting VM_MERGEABLE on VMA creation When a VMA is created, if the MMF_VM_MERGE_ANY flag is set, the VM_MERGEABLE flag will be set for this VMA. 3) support disabling of ksm for a process This adds the ability to disable ksm for a process if ksm has been enabled for the process with prctl. 4) add new prctl option to get and set ksm for a process This adds two new options to the prctl system call - enable ksm for all vmas of a process (if the vmas support it). - query if ksm has been enabled for a process. 3. Disabling MMF_VM_MERGE_ANY for storage keys in s390 In the s390 architecture when storage keys are used, the MMF_VM_MERGE_ANY will be disabled. Link: https://lkml.kernel.org/r/20230418051342.1919757-1-shr@devkernel.io Link: https://lkml.kernel.org/r/20230418051342.1919757-2-shr@devkernel.io Signed-off-by:
Stefan Roesch <shr@devkernel.io> Acked-by:
David Hildenbrand <david@redhat.com> Cc: David Hildenbrand <david@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Rik van Riel <riel@surriel.com> Cc: Bagas Sanjaya <bagasdotme@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
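A hedged userspace example of the two new prctl options; the PR_SET_MEMORY_MERGE/PR_GET_MEMORY_MERGE names and values follow the series but are reproduced from memory:

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_MEMORY_MERGE
    #define PR_SET_MEMORY_MERGE 67
    #define PR_GET_MEMORY_MERGE 68
    #endif

    int main(void)
    {
            /* enable KSM for all eligible vmas of this process (inherited on fork) */
            if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0))
                    perror("PR_SET_MEMORY_MERGE");
            printf("process-level KSM: %d\n", prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0));
            return 0;
    }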
-
- Apr 20, 2023
-
-
Heiko Carstens authored
Make use of the set_direct_map() calls for module allocations. In particular: - All changes to read-only permissions in kernel VA mappings are also applied to the direct mapping. Note that execute permissions are intentionally not applied to the direct mapping in order to make sure that all allocated pages within the direct mapping stay non-executable - module_alloc() passes the VM_FLUSH_RESET_PERMS to __vmalloc_node_range() to make sure that all implicit permission changes made to the direct mapping are reset when the allocated vm area is freed again Side effects: the direct mapping will be fragmented depending on how many vm areas with VM_FLUSH_RESET_PERMS and/or explicit page permission changes are allocated and freed again. For example, just after boot of a system the direct mapping statistics look like: $cat /proc/meminfo ... DirectMap4k: 111628 kB DirectMap1M: 16665600 kB DirectMap2G: 0 kB Acked-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com>
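A hedged sketch, modeled on how other architectures pass VM_FLUSH_RESET_PERMS from module_alloc(); the exact s390 arguments are assumptions:

    void *module_alloc(unsigned long size)
    {
            /* VM_FLUSH_RESET_PERMS: reset direct-map permission changes on vfree() */
            return __vmalloc_node_range(size, MODULE_ALIGN,
                                        MODULES_VADDR, MODULES_END,
                                        GFP_KERNEL, PAGE_KERNEL,
                                        VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
                                        __builtin_return_address(0));
    }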
-
Heiko Carstens authored
Implement the set_direct_map_*() API, which allows to invalidate and set default permissions to pages within the direct mapping. Note that kernel_page_present(), which is also supposed to be part of this API, is intentionally not implemented. The reason for this is that kernel_page_present() is only used (and currently only makes sense) for suspend/resume, which isn't supported on s390. Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com>
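The API surface described above, as declared generically in include/linux/set_memory.h (kernel_page_present() intentionally left out on s390):

    int set_direct_map_invalid_noflush(struct page *page);  /* remove page from direct map */
    int set_direct_map_default_noflush(struct page *page);  /* restore default permissions */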
-
- Apr 13, 2023
-
-
Heiko Carstens authored
Commit bb1520d5 ("s390/mm: start kernel with DAT enabled") did not implement direct map accounting in the early page table setup code. As a result the reported values are bogus now: $cat /proc/meminfo ... DirectMap4k: 5120 kB DirectMap1M: 18446744073709546496 kB DirectMap2G: 0 kB Fix this by adding the missing accounting. The result looks sane again: $cat /proc/meminfo ... DirectMap4k: 6156 kB DirectMap1M: 2091008 kB DirectMap2G: 6291456 kB Fixes: bb1520d5 ("s390/mm: start kernel with DAT enabled") Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Given that set_memory_rox() and set_memory_rwnx() exist, it is possible to get rid of all open coded __set_memory() usages and replace them with proper helper calls everywhere. Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com>
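A hedged illustration of the replacement pattern; the __set_memory() flag names are s390-internal and reproduced from memory:

    static void make_text_ro_x(unsigned long addr, int npages)
    {
            /* before: __set_memory(addr, npages, SET_MEMORY_RO | SET_MEMORY_X); */
            set_memory_rox(addr, npages);
    }

    static void make_data_rw_nx(unsigned long addr, int npages)
    {
            /* before: __set_memory(addr, npages, SET_MEMORY_RW | SET_MEMORY_NX); */
            set_memory_rwnx(addr, npages);
    }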
-
- Apr 06, 2023
-
-
Heiko Carstens authored
Attempt VMA lock-based page fault handling first, and fall back to the existing mmap_lock-based handling if that fails. This is the s390 variant of "x86/mm: try VMA lock-based page fault handling first". Link: https://lkml.kernel.org/r/20230314132808.1266335-1-hca@linux.ibm.com Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
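A hedged sketch of the lock-based fast path this change adopts, modeled on the x86 variant; the real code also checks access permissions before calling handle_mm_fault():

    static vm_fault_t try_vma_locked_fault(struct pt_regs *regs,
                                           unsigned long address, unsigned int flags)
    {
            struct mm_struct *mm = current->mm;
            struct vm_area_struct *vma;
            vm_fault_t fault;

            vma = lock_vma_under_rcu(mm, address);
            if (!vma)
                    return VM_FAULT_RETRY;          /* fall back to mmap_lock path */

            fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
            if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
                    vma_end_read(vma);
            return fault;
    }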
-
- Mar 20, 2023
-
-
Heiko Carstens authored
Make use of atomic_fetch_xor() instead of an atomic_cmpxchg() loop to implement atomic_xor_bits() (aka atomic_xor_return()). This makes the C code more readable and in addition generates better code, since for z196 and newer a single lax instruction is generated instead of a cmpxchg() loop. Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
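A sketch of the before/after described above:

    static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits)
    {
            return atomic_fetch_xor(bits, v) ^ bits;        /* returns the new value */
    }

    /* an equivalent cmpxchg() based loop, for comparison:
     *      do {
     *              old = atomic_read(v);
     *              new = old ^ bits;
     *      } while (atomic_cmpxchg(v, old, new) != old);
     *      return new;
     */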
-
Vasily Gorbik authored
Since regular paging structs are initialized in the decompressor already, move the KASAN shadow mapping to the decompressor as well. This helps to avoid allocating the memory required by KASAN in one large chunk, de-duplicates the paging struct creation code, and starts the uncompressed kernel with KASAN instrumentation right away. This also avoids all the pitfalls of accidentally calling KASAN-instrumented code during KASAN initialization. Acked-by:
Heiko Carstens <hca@linux.ibm.com> Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
Vasily Gorbik authored
Allow changing page table attributes for KASAN shadow memory ranges. Acked-by:
Heiko Carstens <hca@linux.ibm.com> Reviewed-by:
Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-