  Jul 03, 2024
    • slab, rust: extend kmalloc() alignment guarantees to remove Rust padding · ad59baa3
      Vlastimil Babka authored
      
      Slab allocators have been guaranteeing natural alignment for
      power-of-two sizes since commit 59bb4798 ("mm, sl[aou]b: guarantee
      natural alignment for kmalloc(power-of-two)"), while any other sizes are
      guaranteed to be aligned only to ARCH_KMALLOC_MINALIGN bytes (although
      in practice they are aligned more than that in non-debug scenarios).
      
      Rust's allocator API specifies size and alignment per allocation, which
      have to satisfy the following rules, per Alice Ryhl [1]:
      
        1. The alignment is a power of two.
        2. The size is non-zero.
        3. When you round up the size to the next multiple of the alignment,
           then it must not overflow the signed type isize / ssize_t.
      
      In order to map this to kmalloc()'s guarantees, some requested
      allocation sizes have to be padded to the next power-of-two size [2].
      For example, an allocation of size 96 and alignment of 32 will be padded
      to an allocation of size 128, because the existing kmalloc-96 bucket
      doesn't guarantee alignment above ARCH_KMALLOC_MINALIGN. Without slab
      debugging active, the layout of kmalloc-96 slabs nevertheless naturally
      aligns the objects to 32 bytes, so extending the size to 128 bytes is
      wasteful.
      
      To improve the situation we can extend the kmalloc() alignment
      guarantees in a way that
      
      1) doesn't change the current slab layout (and thus does not increase
         internal fragmentation) when slab debugging is not active
      2) reduces waste in the Rust allocator use case
      3) is a superset of the current guarantee for power-of-two sizes.
      
      The extended guarantee is that alignment is at least the largest
      power-of-two divisor of the requested size. For power-of-two sizes the
      largest divisor is the size itself, but let's keep this case documented
      separately for clarity.
      
      For current kmalloc size buckets, it means kmalloc-96 will guarantee
      alignment of 32 bytes and kmalloc-192 will guarantee 64 bytes.
      
      This covers rules 1 and 2 of Rust's API above, as long as the size is a
      multiple of the alignment. The Rust layer should now only need to round
      the size up to the next multiple if it isn't, while enforcing rule 3.
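
      For illustration, the arithmetic the Rust layer is expected to perform
      now boils down to this (a sketch, not code from the patch; the helper
      name is made up):

      #include <stddef.h>

      /* Round the requested size up to the next multiple of the power-of-two
       * alignment.  With the extended kmalloc() guarantee, an allocation of
       * the returned size is aligned to at least 'align'.  Overflow (rule 3)
       * must still be checked by the caller.
       */
      static size_t rust_alloc_size(size_t size, size_t align)
      {
      	return (size + align - 1) & ~(align - 1);
      }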
      
      Implementation-wise, this changes the alignment calculation in
      create_boot_cache(). While at it, also do the calculation only for
      caches with the SLAB_KMALLOC flag, because the function is also used to
      create the initial kmem_cache and kmem_cache_node caches, where no
      alignment guarantee is necessary.
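
      A sketch of the kind of calculation this amounts to (illustrative only,
      not the exact hunk; the helper name and signature are made up):

      #include <linux/bitops.h>
      #include <linux/minmax.h>
      #include <linux/types.h>

      /* kmalloc caches guarantee alignment of at least the largest
       * power-of-two divisor of the (non-zero) object size; other boot
       * caches keep the caller-supplied alignment.
       */
      static unsigned int kmalloc_boot_cache_align(unsigned int align,
      					     unsigned int size,
      					     bool is_kmalloc)
      {
      	if (is_kmalloc)
      		align = max(align, 1U << (ffs(size) - 1));
      	return align;
      }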
      
      In the Rust allocator's krealloc_aligned(), remove the code that padded
      sizes to the next power of two (suggested by Alice Ryhl) as it's no
      longer necessary with the new guarantees.
      
      Reported-by: Alice Ryhl <aliceryhl@google.com>
      Reported-by: Boqun Feng <boqun.feng@gmail.com>
      Link: https://lore.kernel.org/all/CAH5fLggjrbdUuT-H-5vbQfMazjRDpp2%2Bk3%3DYhPyS17ezEqxwcw@mail.gmail.com/ [1]
      Link: https://lore.kernel.org/all/CAH5fLghsZRemYUwVvhk77o6y1foqnCeDzW4WZv6ScEWna2+_jw@mail.gmail.com/ [2]
      Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
      Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
      Reviewed-by: Alice Ryhl <aliceryhl@google.com>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      ad59baa3
  May 19, 2024
    • arch: add ARCH_HAS_KERNEL_FPU_SUPPORT · 6cbd1d6d
      Samuel Holland authored
      Several architectures provide an API to enable the FPU and run
      floating-point SIMD code in kernel space.  However, the function names,
      header locations, and semantics are inconsistent across architectures, and
      FPU support may be gated behind other Kconfig options.
      
      Provide a standard way for architectures to declare that kernel space
      FPU support is available. Architectures selecting this option must
      implement what is currently the most common API (kernel_fpu_begin() and
      kernel_fpu_end(), plus a new function kernel_fpu_available()) and
      provide the appropriate CFLAGS for compiling floating-point C code.
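
      For illustration, a user of the common API would look roughly like this
      (a sketch; the function name, the header location and the handling of
      the per-arch floating-point CFLAGS are assumptions for the example, the
      three kernel_fpu_*() helpers are the ones named above):

      #include <linux/fpu.h>

      static void foo_scale_samples(float *v, int n, float gain)
      {
      	int i;

      	if (!kernel_fpu_available())
      		return;		/* caller falls back to integer math */

      	kernel_fpu_begin();	/* FPU usable from here; no sleeping */
      	for (i = 0; i < n; i++)
      		v[i] *= gain;
      	kernel_fpu_end();
      }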
      
      Link: https://lkml.kernel.org/r/20240329072441.591471-2-samuel.holland@sifive.com
      Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Christian König <christian.koenig@amd.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Borislav Petkov (AMD) <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Huacai Chen <chenhuacai@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nicolas Schier <nicolas@fjasle.eu>
      Cc: Palmer Dabbelt <palmer@rivosinc.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: WANG Xuerui <git@xen0n.name>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      6cbd1d6d
  Apr 02, 2024
    • docs: dma: correct dma_set_mask() sample code · f7ae20f2
      Frank Li authored
      
      There is code like the following in a bunch of drivers:
      
             if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
                     dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
      
      Actually this is wrong: if dma_set_mask_and_coherent(64) fails,
      dma_set_mask_and_coherent(32) will fail for the same reason. And
      dma_set_mask_and_coherent(64) never returns failure anyway.
      
      According to the definition of dma_set_mask(), the mask indicates the
      width of addresses that the device's DMA can access. If it can access
      64-bit addresses, it can inherently access 32-bit addresses as well, so
      only the largest address width needs to be set.

      See the code fragment below:
      
      int dma_set_mask(struct device *dev, u64 mask)
      {
      	mask = (dma_addr_t)mask;
      
      	if (!dev->dma_mask || !dma_supported(dev, mask))
      		return -EIO;
      
      	arch_dma_set_mask(dev, mask);
      	*dev->dma_mask = mask;
      	return 0;
      }
      
      dma_supported() will call dma_direct_supported() or the IOMMU's
      dma_supported() callback function.
      
      int dma_direct_supported(struct device *dev, u64 mask)
      {
      	u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;
      
      	/*
      	 * Because 32-bit DMA masks are so common we expect every architecture
      	 * to be able to satisfy them - either by not supporting more physical
      	 * memory, or by providing a ZONE_DMA32.  If neither is the case, the
      	 * architecture needs to use an IOMMU instead of the direct mapping.
      	 */
      	if (mask >= DMA_BIT_MASK(32))
      		return 1;
      
      	...
      }
      
      The IOMMU's dma_supported() actually expresses the minimum DMA
      capability that the IOMMU requires of the device.
      
      An example:
      
      static int sba_dma_supported(struct device *dev, u64 mask)
      {
      	...
      	/* check if mask is >= than the current max IO Virt Address
      	 * The max IO Virt address will *always* < 30 bits.
      	 */
      	return((int)(mask >= (ioc->ibase - 1 +
      			(ioc->pdir_size / sizeof(u64) * IOVP_SIZE) )));
      	...
      }
      
      1 means supported. 0 means unsupported.
      
      Correct the document to make it clearer and provide correct sample code.
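
      For illustration, the corrected pattern amounts to a single call (a
      sketch, not quoted from the updated document; the probe helper name is
      made up):

      #include <linux/dma-mapping.h>

      static int foo_setup_dma(struct device *dev)
      {
      	/* A device that can address 64 bits can inherently address 32
      	 * bits, so only the widest supported mask needs to be requested
      	 * and checked once.
      	 */
      	return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
      }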
      
      Signed-off-by: Frank Li <Frank.Li@nxp.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jonathan Corbet <corbet@lwn.net>
      [jc: fixed then/than typo]
      Link: https://lore.kernel.org/r/20240401174159.642998-1-Frank.Li@nxp.com
      f7ae20f2
  Feb 06, 2024
    • workqueue: Don't implicitly make UNBOUND workqueues w/ @max_active==1 ordered · 3bc1e711
      Tejun Heo authored
      
      5c0338c6 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered")
      automatically promoted UNBOUND workqueues w/ @max_active==1 to ordered
      workqueues because UNBOUND workqueues w/ @max_active==1 used to be the
      way to create ordered workqueues and the new NUMA support broke it.
      These problems can be subtle and the fact that they can only trigger on
      NUMA machines made them even more difficult to debug.
      
      However, overloading the UNBOUND allocation interface this way creates
      other issues. It's difficult to tell whether a given workqueue actually
      needs to be ordered and users that legitimately want a min concurrency
      level wq unexpectedly get an ordered one instead. With planned UNBOUND
      workqueue updates to improve execution locality and more prevalence of
      chiplet designs which can benefit from such improvements, this isn't a
      state we wanna be in forever.
      
      There aren't that many UNBOUND w/ @max_active==1 users in the tree and the
      preceding patches audited all and converted them to
      alloc_ordered_workqueue() as appropriate. This patch removes the implicit
      promotion of UNBOUND w/ @max_active==1 workqueues to ordered ones.
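
      For illustration, the conversions done by the preceding patches look
      roughly like this (a sketch; the "foo" names are made up):

      #include <linux/errno.h>
      #include <linux/init.h>
      #include <linux/workqueue.h>

      static struct workqueue_struct *foo_wq;

      static int __init foo_init(void)
      {
      	/* Before: alloc_workqueue("foo", WQ_UNBOUND, 1) relied on the
      	 * implicit promotion to an ordered workqueue.  After: ordering
      	 * is requested explicitly.
      	 */
      	foo_wq = alloc_ordered_workqueue("foo", 0);
      	return foo_wq ? 0 : -ENOMEM;
      }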
      
      v2: v1 patch incorrectly dropped !list_empty(&wq->pwqs) condition in
          apply_workqueue_attrs_locked() which spuriously triggers WARNING and
          fails workqueue creation. Fix it.
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: kernel test robot <oliver.sang@intel.com>
      Link: https://lore.kernel.org/oe-lkp/202304251050.45a5df1f-oliver.sang@intel.com
      3bc1e711
  Feb 04, 2024
    • workqueue: Implement BH workqueues to eventually replace tasklets · 4cb1ef64
      Tejun Heo authored
      
      The only generic interface to execute asynchronously in the BH context is
      tasklet; however, it's marked deprecated and has some design flaws, such
      as the execution code accessing the tasklet item after the execution is
      complete, which can lead to subtle use-after-free in certain usage
      scenarios, and less-developed flush and cancel mechanisms.
      
      This patch implements BH workqueues which share the same semantics and
      features of regular workqueues but execute their work items in the softirq
      context. As there is always only one BH execution context per CPU, none of
      the concurrency management mechanisms applies and a BH workqueue can be
      thought of as a convenience wrapper around softirq.
      
      Except for the inability to sleep while executing and lack of max_active
      adjustments, BH workqueues and work items should behave the same as regular
      workqueues and work items.
      
      Currently, the execution is hooked to tasklet[_hi]. However, the goal is to
      convert all tasklet users over to BH workqueues. Once the conversion is
      complete, tasklet can be removed and BH workqueues can directly take over
      the tasklet softirqs.
      
      system_bh[_highpri]_wq are added. As queue-wide flushing doesn't exist in
      tasklet, all existing tasklet users should be able to use the system BH
      workqueues without creating their own workqueues.
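
      For illustration, a tasklet user converts roughly like this (a sketch;
      the "foo" names are made up):

      #include <linux/workqueue.h>

      static void foo_bh_fn(struct work_struct *work)
      {
      	/* Runs in softirq (BH) context, so it must not sleep. */
      }

      static DECLARE_WORK(foo_bh_work, foo_bh_fn);

      static void foo_kick_bottom_half(void)
      {
      	/* Replaces tasklet_schedule(&foo_tasklet). */
      	queue_work(system_bh_wq, &foo_bh_work);
      }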
      
      v3: - Add missing interrupt.h include.
      
      v2: - Instead of using tasklets, hook directly into its softirq action
            functions - tasklet[_hi]_action(). This is slightly cheaper and closer
            to the eventual code structure we want to arrive at. Suggested by Lai.
      
          - Lai also pointed out several places which need NULL worker->task
            handling or can use clarification. Updated.
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/CAHk-=wjDW53w4-YcSmgKC5RruiRLHmJ1sXeYdp_ZgVoBw=5byA@mail.gmail.com
      Tested-by: Allen Pais <allen.lkml@gmail.com>
      Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
      4cb1ef64
  Nov 29, 2023
    • Documentation/gpu: VM_BIND locking document · dad19630
      Thomas Hellström authored
      
      Add the first version of the VM_BIND locking document which is
      intended to be part of the xe driver upstreaming agreement.
      
      The document describes and discusses the locking used during
      exec-functions, eviction and for userptr gpu-vmas. The intention is to
      use the same nomenclature as the drm-vm-bind-async.rst.
      
      v2:
      - s/gvm/gpu_vm/g (Rodrigo Vivi)
      - Clarify the userptr seqlock with a pointer to mm/mmu_notifier.c
        (Rodrigo Vivi)
      - Adjust commit message accordingly.
      - Add SPDX license header.
      
      v3:
      - Large update to align with the drm_gpuvm manager locking
      - Add "Efficient userptr gpu_vma exec function iteration" section
      - Add "Locking at bind- and unbind time" section.
      
      v4:
      - Fix tabs vs space errors by untabifying (Rodrigo Vivi)
      - Minor style fixes and typos (Rodrigo Vivi)
      - Clarify situations where stale GPU mappings are occurring and how
        access through these mappings are blocked. (Rodrigo Vivi)
      - Insert into the toctree in implementation_guidelines.rst
      
      v5:
      - Add a section about recoverable page-faults.
      - Use local references to other documentation where possible
        (Bagas Sanjaya)
      - General documentation fixes and typos (Danilo Krummrich and
        Boris Brezillon)
      - Improve the documentation around locks that need to be grabbed from the
        dma-fence critical section (Boris Brezillon)
      - Add more references to the DRM GPUVM helpers (Danilo Krummrich and
        Boris Brezillon)
      - Update the rfc/xe.rst document.
      
      v6:
      - Rework wording to improve readability (Boris Brezillon, Rodrigo Vivi,
        Bagas Sanjaya)
      - Various minor fixes across the document (Boris Brezillon)
      
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
      Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
      Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Reviewed-by: Danilo Krummrich <dakr@redhat.com>
      Acked-by: John Hubbard <jhubbard@nvidia.com> # Documentation/core-api/pin_user_pages.rst changes
      Link: https://patchwork.freedesktop.org/patch/msgid/20231129090637.2629-1-thomas.hellstrom@linux.intel.com
      dad19630
  Sep 11, 2023
    • Documentation: Drop or replace remaining mentions of IA64 · 94483490
      Ard Biesheuvel authored
      
      Drop or update mentions of IA64, as appropriate.
      
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      94483490
    • arch: Remove Itanium (IA-64) architecture · cf8e8658
      Ard Biesheuvel authored
      The Itanium architecture is obsolete, and an informal survey [0] reveals
      that any residual use of Itanium hardware in production is mostly HP-UX
      or OpenVMS based. The use of Linux on Itanium appears to be limited to
      enthusiasts that occasionally boot a fresh Linux kernel to see whether
      things are still working as intended, and perhaps to churn out some
      distro packages that are rarely used in practice.
      
      None of the original companies behind Itanium still produce or support
      any hardware or software for the architecture, and it is listed as
      'Orphaned' in the MAINTAINERS file, as apparently, none of the engineers
      that contributed on behalf of those companies (nor anyone else, for that
      matter) have been willing to support or maintain the architecture
      upstream or even be responsible for applying the odd fix. The Intel
      firmware team removed all IA-64 support from the Tianocore/EDK2
      reference implementation of EFI in 2018. (Itanium is the original
      architecture for which EFI was developed, and the way Linux supports it
      deviates significantly from other architectures.) Some distros, such as
      Debian and Gentoo, still maintain [unofficial] ia64 ports, but many have
      dropped support years ago.
      
      While the argument is being made [1] that there is a 'for the common
      good' angle to being able to build and run existing projects such as the
      Grid Community Toolkit [2] on Itanium for interoperability testing, the
      fact remains that none of those projects are known to be deployed on
      Linux/ia64, and very few people actually have access to such a system in
      the first place. Even if there were ways imaginable in which Linux/ia64
      could be put to good use today, what matters is whether anyone is
      actually doing that, and this does not appear to be the case.
      
      There are no emulators widely available, and so boot testing Itanium is
      generally infeasible for ordinary contributors. GCC still supports IA-64
      but its compile farm [3] no longer has any IA-64 machines. GLIBC would
      like to get rid of IA-64 [4] too because it would permit some overdue
      code cleanups. In summary, the benefits to the ecosystem of having IA-64
      be part of it are mostly theoretical, whereas the maintenance overhead
      of keeping it supported is real.
      
      So let's rip off the band aid, and remove the IA-64 arch code entirely.
      This follows the timeline proposed by the Debian/ia64 maintainer [5],
      which removes support in a controlled manner, leaving IA-64 in a known
      good state in the most recent LTS release. Other projects will follow
      once the kernel support is removed.
      
      [0] https://lore.kernel.org/all/CAMj1kXFCMh_578jniKpUtx_j8ByHnt=s7S+yQ+vGbKt9ud7+kQ@mail.gmail.com/
      [1] https://lore.kernel.org/all/0075883c-7c51-00f5-2c2d-5119c1820410@web.de/
      [2] https://gridcf.org/gct-docs/latest/index.html
      [3] https://cfarm.tetaneutral.net/machines/list/
      [4] https://lore.kernel.org/all/87bkiilpc4.fsf@mid.deneb.enyo.de/
      [5] https://lore.kernel.org/all/ff58a3e76e5102c94bb5946d99187b358def688a.camel@physik.fu-berlin.de/

      Acked-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      cf8e8658
  Aug 24, 2023
    • crash: memory and CPU hotplug sysfs attributes · 88a6f899
      Eric DeVolder authored
      Introduce the crash_hotplug attribute for memory and CPUs for use by
      userspace.  These attributes directly facilitate the udev rule for
      managing userspace re-loading of the crash kernel upon hot un/plug
      changes.
      
      For memory, expose the crash_hotplug attribute to the
      /sys/devices/system/memory directory.  For example:
      
       # udevadm info --attribute-walk /sys/devices/system/memory/memory81
        looking at device '/devices/system/memory/memory81':
          KERNEL=="memory81"
          SUBSYSTEM=="memory"
          DRIVER==""
          ATTR{online}=="1"
          ATTR{phys_device}=="0"
          ATTR{phys_index}=="00000051"
          ATTR{removable}=="1"
          ATTR{state}=="online"
          ATTR{valid_zones}=="Movable"
      
        looking at parent device '/devices/system/memory':
          KERNELS=="memory"
          SUBSYSTEMS==""
          DRIVERS==""
          ATTRS{auto_online_blocks}=="offline"
          ATTRS{block_size_bytes}=="8000000"
          ATTRS{crash_hotplug}=="1"
      
      For CPUs, expose the crash_hotplug attribute to the
      /sys/devices/system/cpu directory. For example:
      
       # udevadm info --attribute-walk /sys/devices/system/cpu/cpu0
        looking at device '/devices/system/cpu/cpu0':
          KERNEL=="cpu0"
          SUBSYSTEM=="cpu"
          DRIVER=="processor"
          ATTR{crash_notes}=="277c38600"
          ATTR{crash_notes_size}=="368"
          ATTR{online}=="1"
      
        looking at parent device '/devices/system/cpu':
          KERNELS=="cpu"
          SUBSYSTEMS==""
          DRIVERS==""
          ATTRS{crash_hotplug}=="1"
          ATTRS{isolated}==""
          ATTRS{kernel_max}=="8191"
          ATTRS{nohz_full}=="  (null)"
          ATTRS{offline}=="4-7"
          ATTRS{online}=="0-3"
          ATTRS{possible}=="0-7"
          ATTRS{present}=="0-3"
      
      With these sysfs attributes in place, it is possible to efficiently
      instruct the udev rule to skip crash kernel reloading for kernels
      configured with crash hotplug support.
      
      For example, the following is the proposed udev rule change for RHEL
      system 98-kexec.rules (as the first lines of the rule file):
      
       # The kernel updates the crash elfcorehdr for CPU and memory changes
       SUBSYSTEM=="cpu", ATTRS{crash_hotplug}=="1", GOTO="kdump_reload_end"
       SUBSYSTEM=="memory", ATTRS{crash_hotplug}=="1", GOTO="kdump_reload_end"
      
      When examined in the context of 98-kexec.rules, the above rules test if
      crash_hotplug is set, and if so, the userspace initiated
      unload-then-reload of the crash kernel is skipped.
      
      CPU and memory checks are separated in accordance with CONFIG_HOTPLUG_CPU
      and CONFIG_MEMORY_HOTPLUG kernel config options.  If an architecture
      supports, for example, memory hotplug but not CPU hotplug, then the
      /sys/devices/system/memory/crash_hotplug attribute file is present, but
      the /sys/devices/system/cpu/crash_hotplug attribute file will NOT be
      present.  Thus the udev rule skips userspace processing of memory hot
      un/plug events, but the udev rule will evaluate false for CPU events, thus
      allowing userspace to process CPU hot un/plug events (ie the
      unload-then-reload of the kdump capture kernel).
      
      Link: https://lkml.kernel.org/r/20230814214446.6659-5-eric.devolder@oracle.com
      Signed-off-by: Eric DeVolder <eric.devolder@oracle.com>
      Reviewed-by: Sourabh Jain <sourabhjain@linux.ibm.com>
      Acked-by: Hari Bathini <hbathini@linux.ibm.com>
      Acked-by: Baoquan He <bhe@redhat.com>
      Cc: Akhil Raj <lf32.dev@gmail.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Borislav Petkov (AMD) <bp@alien8.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Mimi Zohar <zohar@linux.ibm.com>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Weißschuh <linux@weissschuh.net>
      Cc: Valentin Schneider <vschneid@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      88a6f899
    • mm: add orphaned kernel-doc to the rst files. · 61ff748b
      Matthew Wilcox (Oracle) authored
      There are many files in mm/ that contain kernel-doc which is not
      currently published on kernel.org.  Some of it is easily categorisable,
      but most of it is going into the miscellaneous documentation section to
      be organised later.
      
      Some files aren't ready to be included; they contain documentation with
      build errors.  Or they're nommu.c which duplicates documentation from
      "real" MMU systems.  Those files are noted with a # mark (although really
      anything which isn't a recognised directive would do to prevent inclusion).
      
      Link: https://lkml.kernel.org/r/20230818200630.2719595-5-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      61ff748b
    • mm: remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO · 29d26f12
      Matthew Wilcox (Oracle) authored
      Current best practice is to reuse the name of the function as a define to
      indicate that the function is implemented by the architecture.
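
      For illustration, the convention looks like this in a hypothetical
      architecture header (a sketch; the "foo" arch is made up):

      /* arch/foo/include/asm/cacheflush.h */
      void flush_dcache_folio(struct folio *folio);
      #define flush_dcache_folio flush_dcache_folio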
      
      Link: https://lkml.kernel.org/r/20230802151406.3735276-6-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
      Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      29d26f12
    • mm: add generic flush_icache_pages() and documentation · 3a255267
      Matthew Wilcox (Oracle) authored
      flush_icache_page() is deprecated but not yet removed, so add a range
      version of it.  Change the documentation to refer to
      update_mmu_cache_range() instead of update_mmu_cache().
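
      For illustration, the relationship between the two can be expressed as
      the deprecated helper wrapping the new range version (a sketch based on
      the description above, with signatures assumed from context):

      static inline void flush_icache_page(struct vm_area_struct *vma,
      				     struct page *page)
      {
      	flush_icache_pages(vma, page, 1);
      }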
      
      Link: https://lkml.kernel.org/r/20230802151406.3735276-4-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
      Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      3a255267