  1. Sep 11, 2023
    • zsmalloc: allow only one active pool compaction context · 952b9aab
      Sergey Senozhatsky authored and Frieder Schrempf committed
      commit d2658f20 upstream.
      
      zsmalloc pool can be compacted concurrently by many contexts,
      e.g.
      
       cc1 handle_mm_fault()
            do_anonymous_page()
             __alloc_pages_slowpath()
              try_to_free_pages()
           do_try_to_free_pages()
                lru_gen_shrink_node()
                 shrink_slab()
                  do_shrink_slab()
                   zs_shrinker_scan()
                    zs_compact()
      
      Pool compaction is currently (basically) single-threaded as
      it is performed under pool->lock. Having multiple compaction
      threads results in unnecessary contention, as each thread
      competes for pool->lock. This, in turn, affects all zsmalloc
      operations such as zs_malloc(), zs_map_object(), zs_free(), etc.
      
      Introduce the pool->compaction_in_progress atomic variable,
      which ensures that only one compaction context can run at a
      time. This reduces overall pool->lock contention in (corner)
      cases when many contexts attempt to shrink zspool simultaneously.
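
      A minimal sketch of the guard described above, assuming the new field
      is an atomic_t on struct zs_pool (illustrative only; the exact upstream
      diff may differ in detail):

          static unsigned long zs_compact(struct zs_pool *pool)
          {
                  unsigned long pages_freed = 0;

                  /* Let only one compaction context proceed at a time. */
                  if (atomic_xchg(&pool->compaction_in_progress, 1))
                          return 0;

                  /* ... per size_class compaction, done under pool->lock ... */

                  atomic_set(&pool->compaction_in_progress, 0);
                  return pages_freed;
          }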
      
      Link: https://lkml.kernel.org/r/20230418074639.1903197-1-senozhatsky@chromium.org
      
      
      Fixes: c0547d0b ("zsmalloc: consolidate zs_pool's migrate_lock and size_class's locks")
      Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
      Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • zsmalloc: fix races between modifications of fullness and isolated · cdc44400
      Andrew Yang authored and Frieder Schrempf committed
      [ Upstream commit 4b5d1e47 ]
      
      We encountered many kernel exceptions of VM_BUG_ON(zspage->isolated ==
      0) in dec_zspage_isolation() and BUG_ON(!pages[1]) in zs_unmap_object()
      lately.  This issue only occurs when migration and reclamation occur at
      the same time.
      
      With our memory stress test, we can reproduce this issue several times
      a day.  We have no idea why no one else has encountered it; we only
      switched to the kernel version containing this defect a few months ago.
      
      Since fullness and isolated share the same unsigned int, they must be
      modified under the same lock.
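
      To illustrate the hazard, here is a hypothetical packed layout (not the
      literal mm/zsmalloc.c struct): both fields live in one unsigned int, so
      every store to either field is a read-modify-write of the whole word,
      and concurrent writers under different locks can lose updates.

          struct zspage_flags_sketch {
                  unsigned int fullness:4;        /* written by the reclaim path    */
                  unsigned int isolated:3;        /* written by the migration path  */
          };

          /*
           * CPU A: zspage->fullness = newfg;   and   CPU B: zspage->isolated++;
           * are both read-modify-writes of the same word.  Without a common
           * lock one of the updates can be lost, which is how
           * VM_BUG_ON(zspage->isolated == 0) in dec_zspage_isolation() fires.
           */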
      
      [andrew.yang@mediatek.com: move comment]
        Link: https://lkml.kernel.org/r/20230727062910.6337-1-andrew.yang@mediatek.com
      Link: https://lkml.kernel.org/r/20230721063705.11455-1-andrew.yang@mediatek.com
      
      
      Fixes: c4549b87 ("zsmalloc: remove zspage isolation for migration")
      Signed-off-by: Andrew Yang <andrew.yang@mediatek.com>
      Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
      Cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
      Cc: Matthias Brugger <matthias.bgg@gmail.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • zsmalloc: consolidate zs_pool's migrate_lock and size_class's locks · 96ba61bf
      Nhat Pham authored and Frieder Schrempf committed
      [ Upstream commit c0547d0b ]
      
      Currently, zsmalloc has a hierarchy of locks, which includes a pool-level
      migrate_lock, and a lock for each size class.  We have to obtain both
      locks in the hotpath in most cases anyway, except for zs_malloc.  This
      exception will no longer exist when we introduce an LRU into the zs_pool
      for the new writeback functionality - we will need to obtain a pool-level
      lock to synchronize LRU handling even in zs_malloc.
      
      In preparation for zsmalloc writeback, consolidate these locks into a
      single pool-level lock, which drastically reduces the complexity of
      synchronization in zsmalloc.
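
      Schematically, the hot paths go from two nested locks to one (lock and
      field names follow mm/zsmalloc.c of that era; call sites are simplified,
      this is not the literal diff):

          /* Before: pool-level migrate_lock plus a per size_class lock */
          read_lock(&pool->migrate_lock);
          spin_lock(&class->lock);
          /* ... object / fullness bookkeeping ... */
          spin_unlock(&class->lock);
          read_unlock(&pool->migrate_lock);

          /* After: a single pool-level lock covers the same section */
          spin_lock(&pool->lock);
          /* ... object / fullness bookkeeping ... */
          spin_unlock(&pool->lock);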
      
      We have also benchmarked the lock consolidation to see the performance
      effect of this change on zram.
      
      First, we ran a synthetic FS workload on a server machine with 36 cores
      (same machine for all runs), using
      
      fs_mark  -d  ../zram1mnt  -s  100000  -n  2500  -t  32  -k
      
      before and after for btrfs and ext4 on zram (FS usage is 80%).
      
      Here is the result (unit: files/second):
      
      With lock consolidation (btrfs):
      Average: 13520.2, Median: 13531.0, Stddev: 137.5961482019028
      
      Without lock consolidation (btrfs):
      Average: 13487.2, Median: 13575.0, Stddev: 309.08283679298665
      
      With lock consolidation (ext4):
      Average: 16824.4, Median: 16839.0, Stddev: 89.97388510006668
      
      Without lock consolidation (ext4):
      Average: 16958.0, Median: 16986.0, Stddev: 194.7370021336469
      
      As you can see, we observe a 0.3% regression for btrfs, and a 0.9%
      regression for ext4. This is a small, barely measurable difference in my
      opinion.
      
      For a more realistic scenario, we also tried building the kernel on zram.
      Here is the time it takes (in seconds):
      
      With lock consolidation (btrfs):
      real
      Average: 319.6, Median: 320.0, Stddev: 0.8944271909999159
      user
      Average: 6894.2, Median: 6895.0, Stddev: 25.528415540334656
      sys
      Average: 521.4, Median: 522.0, Stddev: 1.51657508881031
      
      Without lock consolidation (btrfs):
      real
      Average: 319.8, Median: 320.0, Stddev: 0.8366600265340756
      user
      Average: 6896.6, Median: 6899.0, Stddev: 16.04057355583023
      sys
      Average: 520.6, Median: 521.0, Stddev: 1.140175425099138
      
      With lock consolidation (ext4):
      real
      Average: 320.0, Median: 319.0, Stddev: 1.4142135623730951
      user
      Average: 6896.8, Median: 6878.0, Stddev: 28.621670111997307
      sys
      Average: 521.2, Median: 521.0, Stddev: 1.7888543819998317
      
      Without lock consolidation (ext4):
      real
      Average: 319.6, Median: 319.0, Stddev: 0.8944271909999159
      user
      Average: 6886.2, Median: 6887.0, Stddev: 16.93221781102523
      sys
      Average: 520.4, Median: 520.0, Stddev: 1.140175425099138
      
      The difference is entirely within the noise of a typical run on zram.
      This hardly justifies the complexity of maintaining both the pool lock and
      the class lock.  In fact, for writeback, we would need to introduce yet
      another lock to prevent data races on the pool's LRU, further complicating
      the lock handling logic.  IMHO, it is just better to collapse all of these
      into a single pool-level lock.
      
      Link: https://lkml.kernel.org/r/20221128191616.1261026-4-nphamcs@gmail.com
      
      
      Signed-off-by: Nhat Pham <nphamcs@gmail.com>
      Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Seth Jennings <sjenning@redhat.com>
      Cc: Vitaly Wool <vitaly.wool@konsulko.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Stable-dep-of: 4b5d1e47 ("zsmalloc: fix races between modifications of fullness and isolated")
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  2. Oct 21, 2022
  3. Oct 03, 2022
  4. Sep 12, 2022
  5. Aug 28, 2022
  6. Aug 02, 2022
  7. Jul 30, 2022
    • zsmalloc: zs_malloc: return ERR_PTR on failure · c7e6f17b
      Hui Zhu authored
      zs_malloc returns 0 if it fails, and zs_zpool_malloc returns -1 when
      zs_malloc returns 0.  But -1 makes the return value unclear.

      For example, when zswap_frontswap_store calls zs_malloc through
      zs_zpool_malloc, -1 is returned to its caller, whereas the other error
      returns are -EINVAL, -ENODEV, or something else.
      
      This commit changes zs_malloc to return an ERR_PTR on failure.  It does
      not simply make zs_zpool_malloc return -ENOMEM, because zs_malloc has two
      types of failure (see the caller sketch after this list):

      - the size is not OK: return -EINVAL
      - the memory allocation fails: return -ENOMEM.
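
      A caller-side sketch of the new convention (zs_zpool_malloc_sketch is an
      illustrative name, not the real zpool driver callback):

          static int zs_zpool_malloc_sketch(struct zs_pool *pool, size_t size,
                                            gfp_t gfp, unsigned long *out)
          {
                  unsigned long handle = zs_malloc(pool, size, gfp);

                  /* zs_malloc() now encodes the errno in the handle on failure. */
                  if (IS_ERR_VALUE(handle))
                          return PTR_ERR((void *)handle); /* -EINVAL or -ENOMEM */

                  *out = handle;
                  return 0;
          }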
      
      Link: https://lkml.kernel.org/r/20220714080757.12161-1-teawater@gmail.com
      
      
      Signed-off-by: Hui Zhu <teawater@antgroup.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  8. Jul 04, 2022
    • mm: shrinkers: provide shrinkers with names · e33c267a
      Roman Gushchin authored
      Currently shrinkers are anonymous objects.  For debugging purposes they
      can be identified by count/scan function names, but that's not always
      useful: e.g. for a superblock's shrinker it's nice to have at least an
      idea of which superblock the shrinker belongs to.
      
      This commit adds names to shrinkers.  The register_shrinker() and
      prealloc_shrinker() functions are extended to take a format string and
      arguments to generate a name.
      
      In some cases it's not possible to determine a good name at the time when
      a shrinker is allocated.  For such cases shrinker_debugfs_rename() is
      provided.
      
      The expected format is:
          <subsystem>-<shrinker_type>[:<instance>]-<id>
      For some shrinkers an instance can be encoded as a (MAJOR:MINOR) pair.
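
      For example, a zsmalloc pool can register its shrinker with a name in
      that format through the extended API (a sketch; the exact zsmalloc call
      site may differ):

          static int zs_register_shrinker(struct zs_pool *pool)
          {
                  pool->shrinker.scan_objects = zs_shrinker_scan;
                  pool->shrinker.count_objects = zs_shrinker_count;
                  pool->shrinker.batch = 0;
                  pool->shrinker.seeks = DEFAULT_SEEKS;

                  /* Shows up as e.g. "mm-zspool:zram0-<id>" in debugfs. */
                  return register_shrinker(&pool->shrinker, "mm-zspool:%s",
                                           pool->name);
          }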
      
      After this change the shrinker debugfs directory looks like:
        $ cd /sys/kernel/debug/shrinker/
        $ ls
          dquota-cache-16     sb-devpts-28     sb-proc-47       sb-tmpfs-42
          mm-shadow-18        sb-devtmpfs-5    sb-proc-48       sb-tmpfs-43
          mm-zspool:zram0-34  sb-hugetlbfs-17  sb-pstore-31     sb-tmpfs-44
          rcu-kfree-0         sb-hugetlbfs-33  sb-rootfs-2      sb-tmpfs-49
          sb-aio-20           sb-iomem-12      sb-securityfs-6  sb-tracefs-13
          sb-anon_inodefs-15  sb-mqueue-21     sb-selinuxfs-22  sb-xfs:vda1-36
          sb-bdev-3           sb-nsfs-4        sb-sockfs-8      sb-zsmalloc-19
          sb-bpf-32           sb-pipefs-14     sb-sysfs-26      thp-deferred_split-10
          sb-btrfs:vda2-24    sb-proc-25       sb-tmpfs-1       thp-zero-9
          sb-cgroup2-30       sb-proc-39       sb-tmpfs-27      xfs-buf:vda1-37
          sb-configfs-23      sb-proc-41       sb-tmpfs-29      xfs-inodegc:vda1-38
          sb-dax-11           sb-proc-45       sb-tmpfs-35
          sb-debugfs-7        sb-proc-46       sb-tmpfs-40
      
      [roman.gushchin@linux.dev: fix build warnings]
        Link: https://lkml.kernel.org/r/Yr+ZTnLb9lJk6fJO@castle
      
      
      Reported-by: kernel test robot <lkp@intel.com>
      Link: https://lkml.kernel.org/r/20220601032227.4076670-4-roman.gushchin@linux.dev
      
      
      Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Hillf Danton <hdanton@sina.com>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  9. May 13, 2022
    • zsmalloc: fix races between asynchronous zspage free and page migration · 2505a981
      Sultan Alsawaf authored
      The asynchronous zspage free worker tries to lock a zspage's entire page
      list without defending against page migration.  Since pages which haven't
      yet been locked can concurrently migrate off the zspage page list while
      lock_zspage() churns away, lock_zspage() can suffer from a few different
      lethal races.
      
      It can lock a page which no longer belongs to the zspage and unsafely
      dereference page_private(), it can unsafely dereference a torn pointer to
      the next page (since there's a data race), and it can observe a spurious
      NULL pointer to the next page and thus not lock all of the zspage's pages
      (since a single page migration will reconstruct the entire page list, and
      create_page_chain() unconditionally zeroes out each list pointer in the
      process).
      
      Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
      with page migration.
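
      A condensed sketch of that approach (helper names as used in
      mm/zsmalloc.c; details may differ slightly from the actual fix):

          static void lock_zspage(struct zspage *zspage)
          {
                  struct page *curr_page, *page;

                  /*
                   * Hold migrate_read_lock() whenever walking the page list so
                   * a concurrent migration cannot rebuild it under us.
                   */
                  while (1) {
                          migrate_read_lock(zspage);
                          page = get_first_page(zspage);
                          if (trylock_page(page))
                                  break;
                          get_page(page);
                          migrate_read_unlock(zspage);
                          wait_on_page_locked(page);
                          put_page(page);
                  }

                  curr_page = page;
                  while ((page = get_next_page(curr_page))) {
                          if (trylock_page(page)) {
                                  curr_page = page;
                          } else {
                                  get_page(page);
                                  migrate_read_unlock(zspage);
                                  wait_on_page_locked(page);
                                  put_page(page);
                                  migrate_read_lock(zspage);
                          }
                  }
                  migrate_read_unlock(zspage);
          }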
      
      Link: https://lkml.kernel.org/r/20220509024703.243847-1-sultan@kerneltoast.com
      
      
      Fixes: 77ff4657 ("zsmalloc: zs_page_migrate: skip unnecessary loops but not return -EBUSY if zspage is not inuse")
      Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  10. Jan 22, 2022
  11. Jan 06, 2022
  12. Nov 06, 2021
  13. Jul 01, 2021
  14. May 07, 2021
  15. May 05, 2021
  16. Feb 26, 2021
  17. Dec 15, 2020
  18. Dec 06, 2020
    • mm/zsmalloc.c: drop ZSMALLOC_PGTABLE_MAPPING · e91d8d78
      Minchan Kim authored
      While I was doing zram testing, I found that decompression sometimes
      failed because the compression buffer was corrupted.  On investigation,
      I found that the commit below calls cond_resched() unconditionally, so
      it can cause a problem in atomic context if the task is rescheduled.
      
        BUG: sleeping function called from invalid context at mm/vmalloc.c:108
        in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 946, name: memhog
        3 locks held by memhog/946:
         #0: ffff9d01d4b193e8 (&mm->mmap_lock#2){++++}-{4:4}, at: __mm_populate+0x103/0x160
         #1: ffffffffa3d53de0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0xa98/0x1160
         #2: ffff9d01d56b8110 (&zspage->lock){.+.+}-{3:3}, at: zs_map_object+0x8e/0x1f0
        CPU: 0 PID: 946 Comm: memhog Not tainted 5.9.3-00011-gc5bfc0287345-dirty #316
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1 04/01/2014
        Call Trace:
          unmap_kernel_range_noflush+0x2eb/0x350
          unmap_kernel_range+0x14/0x30
          zs_unmap_object+0xd5/0xe0
          zram_bvec_rw.isra.0+0x38c/0x8e0
          zram_rw_page+0x90/0x101
          bdev_write_page+0x92/0xe0
          __swap_writepage+0x94/0x4a0
          pageout+0xe3/0x3a0
          shrink_page_list+0xb94/0xd60
          shrink_inactive_list+0x158/0x460
      
      We can fix this by removing the ZSMALLOC_PGTABLE_MAPPING feature (which
      contains the offending calling code) from zsmalloc.
      
      Even though this option showed some improvement (e.g., 30%) on some
      arm32 platforms, it has been a headache to maintain since it abused
      APIs[1] (e.g., unmap_kernel_range in atomic context).

      Since we are approaching the deprecation of 32-bit machines, the config
      option has only been available for builtin builds since v5.8, and it is
      not the default option in zsmalloc anyway, it's time to drop the option
      for better maintainability.
      
      [1] http://lore.kernel.org/linux-mm/20201105170249.387069-1-minchan@kernel.org
      
      
      
      Fixes: e47110e9 ("mm/vunmap: add cond_resched() in vunmap_pmd_range")
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Harish Sriram <harish@linux.ibm.com>
      Cc: Uladzislau Rezki <urezki@gmail.com>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/20201117202916.GA3856507@google.com
      
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. Oct 18, 2020
  20. Aug 12, 2020
  21. Jun 09, 2020
    • mm: reorder includes after introduction of linux/pgtable.h · 65fddcfc
      Mike Rapoport authored
      
      The replacement of <asm/pgtable.h> with <linux/pgtable.h> left the include
      of the latter in the middle of the asm includes.  Fix this up with the aid
      of the script below and manual adjustments here and there.
      
      	import sys
      	import re
      
      	if len(sys.argv) is not 3:
      	    print "USAGE: %s <file> <header>" % (sys.argv[0])
      	    sys.exit(1)
      
      	hdr_to_move="#include <linux/%s>" % sys.argv[2]
      	moved = False
      	in_hdrs = False
      
      	with open(sys.argv[1], "r") as f:
      	    lines = f.readlines()
      	    for _line in lines:
      		line = _line.rstrip('\n')
      		if line == hdr_to_move:
      		    continue
      		if line.startswith("#include <linux/"):
      		    in_hdrs = True
      		elif not moved and in_hdrs:
      		    moved = True
      		    print hdr_to_move
      		print line
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org
      
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: introduce include/linux/pgtable.h · ca5999fd
      Mike Rapoport authored
      
      The include/linux/pgtable.h is going to be the home of generic page table
      manipulation functions.
      
      Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
      make the latter include asm/pgtable.h.
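
      A minimal sketch of the resulting header (guard name assumed for
      illustration):

          /* include/linux/pgtable.h */
          #ifndef _LINUX_PGTABLE_H
          #define _LINUX_PGTABLE_H

          #include <asm/pgtable.h>        /* architecture definitions */

          /* ... generic helpers formerly in include/asm-generic/pgtable.h ... */

          #endif /* _LINUX_PGTABLE_H */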
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
      
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. Jun 02, 2020
    • mm: remove map_vm_range · ed1f324c
      Christoph Hellwig authored
      
      Switch all callers to map_kernel_range, which is symmetric to the unmap side
      (as well as the _noflush versions).
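
      For reference, the symmetric pair being converged on looked roughly like
      this at the time (illustrative declarations; treat the exact signatures
      as an assumption):

          int  map_kernel_range(unsigned long start, unsigned long size,
                                pgprot_t prot, struct page **pages);
          void unmap_kernel_range(unsigned long addr, unsigned long size);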
      
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Gao Xiang <xiang@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Kelley <mikelley@microsoft.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Wei Liu <wei.liu@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mackerras <paulus@ozlabs.org>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: http://lkml.kernel.org/r/20200414131348.444715-17-hch@lst.de
      
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: rename CONFIG_PGTABLE_MAPPING to CONFIG_ZSMALLOC_PGTABLE_MAPPING · 8b136018
      Christoph Hellwig authored
      
      Rename the Kconfig variable to clarify the scope.
      
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Gao Xiang <xiang@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Kelley <mikelley@microsoft.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Wei Liu <wei.liu@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mackerras <paulus@ozlabs.org>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: http://lkml.kernel.org/r/20200414131348.444715-11-hch@lst.de
      
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  23. Apr 07, 2020