- Aug 02, 2024
-
Javier Carrasco authored
Re-run the shell fragment that generated the original list.

Signed-off-by: Javier Carrasco <javier.carrasco.cruz@gmail.com>
Link: https://lore.kernel.org/r/20240730-clang-format-for-each-macro-update-v2-1-254fca862c97@gmail.com
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
-
- Jun 26, 2024
-
SeongJae Park authored
'clang-format' is in the 'Other material' section of 'process/index', but it may fit better under the 'dev-tools/' directory. Move it.

Signed-off-by: SeongJae Park <sj@kernel.org>
Acked-by: Miguel Ojeda <ojeda@kernel.org>
Acked-by: Federico Vaga <federico.vaga@vaga.pv.it>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Link: https://lore.kernel.org/r/20240624185312.94537-5-sj@kernel.org
-
- Dec 08, 2023
-
Miguel Ojeda authored
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
-
Elliot Berman authored
Add maple tree's for_each macros so clang-format operates correctly on {mt,mas}_for_each.

Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
Link: https://lore.kernel.org/r/20231208-clang-format-mt-for-each-v1-1-b4b73186b886@quicinc.com
[ Sorted properly. ]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
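A rough usage sketch of the two iterators (the tree variable "mt" and the printouts are illustrative, not taken from the patch):

    /* Sketch only: assumes a populated struct maple_tree named "mt"
     * whose entries are plain pointers. */
    unsigned long index = 0;
    void *entry;
    MA_STATE(mas, &mt, 0, 0);

    /* mt_for_each(): external iteration by index. */
    mt_for_each(&mt, entry, index, ULONG_MAX)
            pr_info("index %lu -> %p\n", index, entry);

    /* mas_for_each(): iteration through a ma_state cursor. */
    rcu_read_lock();
    mas_for_each(&mas, entry, ULONG_MAX)
            pr_info("index %lu -> %p\n", mas.index, entry);
    rcu_read_unlock();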
-
- May 23, 2023
-
Jason Gunthorpe authored
Add a convenience macro to iterate over every struct group_device in the group, and replace all open-coded list_for_each_entry() uses with it.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/2-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
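A hedged sketch of what such a wrapper looks like; the macro is internal to the iommu core, so the exact definition and the do_something() callback below are assumptions for illustration:

    /* Hypothetical sketch: iterate every struct group_device on the
     * group's device list instead of open-coding list_for_each_entry(). */
    #define for_each_group_device(group, pos) \
            list_for_each_entry(pos, &(group)->devices, list)

    struct group_device *device;

    for_each_group_device(group, device)
            do_something(device->dev);      /* illustrative callback */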
-
- Apr 18, 2023
-
Lukas Wunner authored
The PCI core has just been amended to create a pci_doe_mb struct for every DOE instance on device enumeration. CXL (the only in-tree DOE user so far) has been migrated to use those mailboxes instead of creating its own.

That leaves pcim_doe_create_mb() and pci_doe_for_each_off() without any callers, so drop them. pci_doe_supports_prot() is now only used internally, so declare it static. pci_doe_destroy_mb() is no longer used as callback for devm_add_action(), so refactor it to accept a struct pci_doe_mb pointer instead of a generic void pointer.

Because pci_doe_create_mb() is only called on device enumeration, i.e. before driver binding, the workqueue name never contains a driver name. So replace dev_driver_string() with dev_bus_name() when generating the workqueue name.

Tested-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Ming Li <ming4.li@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/64f614b6584982986c55d2c6229b4ee2b276dd59.1678543498.git.lukas@wunner.de
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- Apr 04, 2023
-
Mika Westerberg authored
Instead of open-coding it everywhere, introduce a tiny helper that can be used to iterate over each resource of a PCI device, and convert the most obvious users to it. While at it, drop a doubled empty line before pdev_sort_resources(). No functional changes intended.

Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20230330162434.35055-4-andriy.shevchenko@linux.intel.com
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Krzysztof Wilczyński <kw@linux.com>
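A hedged before/after sketch, assuming the helper is the pci_dev_for_each_resource() macro this change introduces:

    /* Open-coded iteration over a device's resources ... */
    struct resource *res;
    int i;

    for (i = 0; i < PCI_NUM_RESOURCES; i++) {
            res = &pdev->resource[i];
            if (resource_size(res))
                    pr_info("%pR\n", res);
    }

    /* ... becomes, with the new helper (sketch): */
    pci_dev_for_each_resource(pdev, res) {
            if (resource_size(res))
                    pr_info("%pR\n", res);
    }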
-
- Mar 05, 2023
-
Linus Torvalds authored
Commit aa47a7c2 ("lib/cpumask: deprecate nr_cpumask_bits") resulted in the cpumask operations potentially becoming hugely less efficient, because suddenly the cpumask was always considered to be variable-sized. The optimization was then later added back in a limited form by commit 6f9c07be ("lib/cpumask: add FORCE_NR_CPUS config option"), but that FORCE_NR_CPUS option is not useful in a generic kernel and more of a special case for embedded situations with fixed hardware.

Instead, just re-introduce the optimization, with some changes. Instead of depending on CPUMASK_OFFSTACK being false, and then always using the full constant cpumask width, this introduces three different cpumask "sizes":

 - The exact size (nr_cpumask_bits) remains identical to nr_cpu_ids. This is used for situations where we should use the exact size.

 - The "small" size (small_cpumask_bits) is the NR_CPUS constant if it fits in a single word and the bitmap operations thus end up able to trigger the "small_const_nbits()" optimizations. This is used for the operations that have optimized single-word cases that get inlined, notably the bit find and scanning functions.

 - The "large" size (large_cpumask_bits) is the NR_CPUS constant if it is a sufficiently small constant that makes simple "copy" and "clear" operations more efficient. This is arbitrarily set at four words or less.

As an example of this situation, without this fixed-size optimization, cpumask_clear() will generate code like

    movl    nr_cpu_ids(%rip), %edx
    addq    $63, %rdx
    shrq    $3, %rdx
    andl    $-8, %edx
    callq   memset@PLT

on x86-64, because it would calculate the "exact" number of longwords that need to be cleared. In contrast, with this patch, using a MAX_CPU of 64 (which is quite a reasonable value to use), the above becomes a single

    movq $0,cpumask

instruction instead, because instead of caring to figure out exactly how many CPUs the system has, it just knows that the cpumask will be a single word and can just clear it all.

Note that this does end up tightening the rules a bit from the original version in another way: operations that set bits in the cpumask are now limited to the actual nr_cpu_ids limit, whereas we used to do the nr_cpumask_bits thing almost everywhere in the cpumask code. But if you just clear bits, or scan for bits, we can use the simpler compile-time constants.

In the process, remove 'cpumask_complement()' and 'for_each_cpu_not()' which were not useful, and which fundamentally have to be limited to 'nr_cpu_ids'. Better to remove them now than have somebody introduce a use of them later.

Of course, on x86-64 with MAXSMP there is no sane small compile-time constant for the cpumask sizes, and we end up using the actual CPU bits, and will generate the above kind of horrors regardless. Please don't use MAXSMP unless you really expect to have machines with thousands of cores.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
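A rough C sketch of the idea; the exact definitions in include/linux/cpumask.h may differ slightly, this only shows how the three sizes pick between compile-time constants and nr_cpu_ids:

    /* Sketch: use compile-time constants when NR_CPUS is small enough. */
    #if NR_CPUS <= BITS_PER_LONG
      #define small_cpumask_bits ((unsigned int)NR_CPUS)
      #define large_cpumask_bits ((unsigned int)NR_CPUS)
    #elif NR_CPUS <= 4 * BITS_PER_LONG
      #define small_cpumask_bits nr_cpu_ids
      #define large_cpumask_bits ((unsigned int)NR_CPUS)
    #else
      #define small_cpumask_bits nr_cpu_ids
      #define large_cpumask_bits nr_cpu_ids
    #endif
    #define nr_cpumask_bits nr_cpu_ids

    /* Clearing can then use the "large" constant, letting a 64-CPU
     * config compile down to a single store instead of a memset() call. */
    static inline void cpumask_clear(struct cpumask *dstp)
    {
            bitmap_zero(cpumask_bits(dstp), large_cpumask_bits);
    }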
-
- Jan 22, 2023
-
Jacopo Mondi authored
Add a for_each_active_route() macro to replace the repeated pattern of iterating on the active routes of a routing table.

Signed-off-by: Jacopo Mondi <jacopo+renesas@jmondi.org>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org>
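A hedged usage sketch, assuming the V4L2 subdev routing types (struct v4l2_subdev_route and a subdev state holding the routing table); the "state" variable is illustrative:

    /* Sketch: walk only the routes flagged as active in a routing table. */
    struct v4l2_subdev_route *route;

    for_each_active_route(&state->routing, route) {
            /* e.g. translate route->sink_pad/route->sink_stream to
             * route->source_pad/route->source_stream */
    }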
-
- Dec 02, 2022
-
John Ogness authored
Provide an NMI-safe SRCU protected variant to walk the console list. Note that all console fields are now set before adding the console to the list to avoid the console becoming visible to SRCU readers before being fully initialized.

This is a preparatory change for a new console infrastructure which operates independently of the console BKL.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Ogness <john.ogness@linutronix.de>
Acked-by: Miguel Ojeda <ojeda@kernel.org>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/20221116162152.193147-4-john.ogness@linutronix.de
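A hedged sketch of the reader side this series adds (console_srcu_read_lock()/unlock() bracketing for_each_console_srcu()):

    /* Sketch: NMI-safe, SRCU-protected walk of the console list. */
    struct console *con;
    int cookie;

    cookie = console_srcu_read_lock();
    for_each_console_srcu(con) {
            /* read con under SRCU; the list may change concurrently,
             * but entries seen here stay valid for the read side */
    }
    console_srcu_read_unlock(cookie);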
-
- Dec 01, 2022
-
Florian Westphal authored
ping_lookup() does not acquire the table spinlock, so iteration should use hlist_nulls_for_each_entry_rcu(). Spotted during code review.

Fixes: dbca1596 ("ping: convert to RCU lookups, get rid of rwlock")
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Link: https://lore.kernel.org/r/20221129140644.28525-1-fw@strlen.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
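A hedged sketch of the lockless-lookup pattern in question; "hslot" and match() are illustrative, not the exact ping_lookup() code:

    /* Sketch: RCU-safe traversal of a nulls-terminated hash chain,
     * where hslot is a struct hlist_nulls_head *. */
    struct sock *sk;
    struct hlist_nulls_node *hnode;

    rcu_read_lock();
    hlist_nulls_for_each_entry_rcu(sk, hnode, hslot, sk_nulls_node) {
            if (match(sk))          /* match() is illustrative */
                    break;
    }
    rcu_read_unlock();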
-
Jason Gunthorpe authored
This is the remainder of the IOAS data structure. Provide an object called an io_pagetable that is composed of iopt_areas pointing at iopt_pages, along with a list of iommu_domains that mirror the IOVA to PFN map.

At the top this is a simple interval tree of iopt_areas indicating the map of IOVA to iopt_pages. An xarray keeps track of a list of domains. Based on the attached domains there is a minimum alignment for areas (which may be smaller than PAGE_SIZE), an interval tree of reserved IOVA that can't be mapped, and an IOVA of allowed IOVA that can always be mapped. The concept of an 'access' refers to something like a VFIO mdev that is accessing the IOVA and using a 'struct page *' for CPU-based access.

Externally an API is provided that matches the requirements of the IOCTL interface for map/unmap and domain attachment. The API provides a 'copy' primitive to establish a new IOVA map in a different IOAS from an existing mapping by re-using the iopt_pages. This is the basic mechanism to provide single pinning.

This is designed to support a pre-registration flow where userspace would set up a dummy IOAS with no domains, map in memory and then establish an access to pin all PFNs into the xarray. Copy can then be used to create new IOVA mappings in a different IOAS, with iommu_domains attached. Upon copy the PFNs will be read out of the xarray and mapped into the iommu_domains, avoiding any pin_user_pages() overheads.

Link: https://lore.kernel.org/r/10-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
The top of the data structure provides an IO Address Space (IOAS) that is similar to a VFIO container. The IOAS allows map/unmap of memory into ranges of IOVA called iopt_areas. Multiple IOMMU domains (IO page tables) and in-kernel accesses (like VFIO mdevs) can be attached to the IOAS to access the PFNs that those IOVA areas cover.

The IO Address Space (IOAS) data structure is composed of:
 - struct io_pagetable holding the IOVA map
 - struct iopt_areas representing populated portions of IOVA
 - struct iopt_pages representing the storage of PFNs
 - struct iommu_domain representing each IO page table in the system IOMMU
 - struct iopt_pages_access representing in-kernel accesses of PFNs (i.e. VFIO mdevs)
 - struct xarray pinned_pfns holding a list of pages pinned by in-kernel accesses

This patch introduces the lowest part of the data structure - the movement of PFNs in a tiered storage scheme:
 1) iopt_pages::pinned_pfns xarray
 2) Multiple iommu_domains
 3) The origin of the PFNs, i.e. the userspace pointer

PFNs have to be copied between all combinations of tiers, depending on the configuration. The interface is an iterator called a 'pfn_reader' which determines in which tier each PFN is stored and loads it into a list of PFNs held in a struct pfn_batch. Each step of the iterator will fill up the pfn_batch, then the caller can use the pfn_batch to send the PFNs to the required destination. Repeating this loop will read all the PFNs in an IOVA range.

The pfn_reader and pfn_batch also keep track of the pinned page accounting. While PFNs are always stored and accessed as full PAGE_SIZE units, the iommu_domain tier can store with a sub-page offset/length to support IOMMUs with a smaller IOPTE size than PAGE_SIZE.

Link: https://lore.kernel.org/r/8-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- Nov 29, 2022
-
Jason Gunthorpe authored
The span iterator travels over the indexes of the interval_tree, not the nodes, and classifies spans of indexes as either 'used' or 'hole'. 'used' spans are fully covered by nodes in the tree and 'hole' spans have no node intersecting the span. This is done greedily such that spans are maximally sized and every iteration step switches between used/hole.

As an example a trivial allocator can be written as:

    for (interval_tree_span_iter_first(&span, itree, 0, ULONG_MAX);
         !interval_tree_span_iter_done(&span);
         interval_tree_span_iter_next(&span))
            if (span.is_hole &&
                span.last_hole - span.start_hole >= allocation_size - 1)
                    return span.start_hole;

With all the tricky boundary conditions handled by the library code.

The following iommufd patches have several algorithms for its overlapping node interval trees that are significantly simplified with this kind of iteration primitive. As it seems generally useful, put it into lib/.

Link: https://lore.kernel.org/r/3-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- Jul 19, 2022
-
Jonathan Cameron authored
Introduced in PCIe r6.0, sec 6.30, DOE provides a config space based mailbox with standard protocol discovery. Each mailbox is accessed through a DOE Extended Capability. Each DOE mailbox must support the DOE discovery protocol in addition to any number of additional protocols.

Define core PCIe functionality to manage a single PCIe DOE mailbox at a defined config space offset. Functionality includes iterating, creating, query of supported protocol, and task submission. Destruction of the mailboxes is device managed.

Cc: "Li, Ming" <ming4.li@intel.com>
Cc: Bjorn Helgaas <helgaas@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Acked-by: Bjorn Helgaas <helgaas@kernel.org>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Co-developed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Link: https://lore.kernel.org/r/20220719205249.566684-4-ira.weiny@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- May 20, 2022
-
Brian Norris authored
Set SpaceBeforeParens to ControlStatementsExceptForEachMacros to not add space between a for_each macro and the following parenthesis. This option is available since clang-format-11 [1] and is in line with the checkpatch.pl rules [2]. I found that this patch has also been sent by Brian Norris some weeks ago [3].

Link: https://clang.llvm.org/docs/ClangFormatStyleOptions.html [1]
Link: https://lore.kernel.org/r/8b6b252b-47a6-9d52-f0bd-10d3bc4ad244@digikod.net [2]
Link: https://lore.kernel.org/lkml/YmHuZjmP9MxkgJ0R@google.com/ [3]
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Tom Rix <trix@redhat.com>
Signed-off-by: Brian Norris <briannorris@chromium.org>
Co-developed-by: Mickaël Salaün <mic@digikod.net>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20220506160106.522341-6-mic@digikod.net
[Adjusted authorship as agreed]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
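For illustration, the resulting style keeps the space after real control statements but drops it after for_each-style macros (the struct and list here are made up):

    struct list_head head;          /* illustrative */
    struct item *pos;               /* "struct item" is a made-up type with a "list" member */
    int count = 0;

    /* Control statements keep a space before the parenthesis ... */
    if (!list_empty(&head))
            count++;

    /* ... while for_each macros are formatted like function calls,
     * with no space before the parenthesis: */
    list_for_each_entry(pos, &head, list)
            count++;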
-
Mickaël Salaün authored
Thanks to IndentGotoLabels, introduced with clang-format-10 [1], we can avoid indenting goto labels. This follows the current coding style and is in line with the checkpatch.pl rules [2].

Link: https://clang.llvm.org/docs/ClangFormatStyleOptions.html [1]
Link: https://lore.kernel.org/r/8b6b252b-47a6-9d52-f0bd-10d3bc4ad244@digikod.net [2]
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Tom Rix <trix@redhat.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20220506160106.522341-4-mic@digikod.net
[Updated header comment to >= 10]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
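For illustration, with goto-label indentation disabled the label stays at the start of the line, matching kernel style (setup() and do_work() are made-up helpers):

    int example(void)
    {
            int ret;

            ret = setup();          /* illustrative helper */
            if (ret)
                    goto out;

            ret = do_work();        /* illustrative helper */
    out:
            return ret;
    }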
-
Mickaël Salaün authored
We get new interesting formatting with clang-format greater than or equal to 6, as stated in the removed comments. Miguel Ojeda suggested to even move the minimal clang-format version to 11, which is the minimum LLVM version supported at the moment [1].

Automatically updated with:

    sed -i 's/^\(\s*\)#\(\S*\s\+\S*\) # Unknown to clang-format.*/\1\2/' .clang-format

Link: https://lore.kernel.org/r/CANiq72nLOfmEt-CZBmm2ouEB_x6Jm9ggDVFCVJxYxKw7O0LTzQ@mail.gmail.com [1]
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Tom Rix <trix@redhat.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20220506160106.522341-3-mic@digikod.net
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
-
Mickaël Salaün authored
Add tools/ to the shell fragment generating the for_each list and update it. This is useful to format files in the tools directory (e.g. selftests) with the same coding style as the kernel.

Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Tom Rix <trix@redhat.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20220506160106.522341-2-mic@digikod.net
[Reworded and rebased on top of previous commits]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
-
Miguel Ojeda authored
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
-
Miguel Ojeda authored
This avoids differences when different people run the command, which is relevant for our use case, e.g.:

    $ LC_ALL=en_US.UTF-8 sort test
    ata_for_each_link
    __ata_qc_for_each
    ata_qc_for_each

    $ LC_ALL=C sort test
    __ata_qc_for_each
    ata_for_each_link
    ata_qc_for_each

Link: https://lore.kernel.org/lkml/CANiq72=7=ZpAObWRmposOmnyZ8XR_eNHCBtA3bu3fusmcPUwDA@mail.gmail.com/
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
-
Miguel Ojeda authored
Re-run the shell fragment that generated the original list. This brings it up to date, so that the next patches that tweak it further are more clear on what they change.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
-
- Dec 16, 2021
-
Thomas Gleixner authored
There is no real reason to do several loops over the MSI descriptors instead of just doing one loop. In case of an error everything is undone anyway so it does not matter whether it's a partial or a full rollback.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Tested-by: Nishanth Menon <nm@ti.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20211206210749.010234767@linutronix.de
-
- May 12, 2021
-
Miguel Ojeda authored
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
-
- Feb 17, 2021
-
Ben Widawsky authored
Add a straightforward IOCTL that provides a mechanism for userspace to query the supported memory device commands. CXL commands as they appear to userspace are described as part of the UAPI kerneldoc. The command list returned via this IOCTL will contain the full set of commands that the driver supports; however, some of those commands may not be available for use by userspace.

Memory device commands first appear in the CXL 2.0 specification. They are submitted through a mailbox mechanism specified in the CXL 2.0 specification.

The send command allows userspace to issue mailbox commands directly to the hardware. The list of available commands to send is the output of the query command. The driver verifies basic properties of the command and possibly inspects the input (or output) payload to determine whether or not the command is allowed (or might taint the kernel).

Reported-by: kernel test robot <lkp@intel.com> # bug in earlier revision
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com> (v2)
Cc: Al Viro <viro@zeniv.linux.org.uk>
Link: https://lore.kernel.org/r/20210217040958.1354670-5-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- Jan 29, 2021
-
Miguel Ojeda authored
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
-
- Oct 14, 2020
-
Mike Rapoport authored
for_each_memblock() is used to iterate over memblock.memory in a few places that use data from memblock_region rather than the memory ranges. Introduce separate for_each_mem_region() and for_each_reserved_mem_region() to improve encapsulation of memblock internals from its users.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Ingo Molnar <mingo@kernel.org> [x86]
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> [MIPS]
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-18-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
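A hedged usage sketch for a caller that genuinely needs the struct memblock_region fields rather than just address ranges:

    /* Sketch: walk memblock.memory regions and inspect region metadata. */
    struct memblock_region *region;

    for_each_mem_region(region) {
            if (memblock_is_hotpluggable(region))
                    pr_info("hotpluggable: %pa + %pa\n",
                            &region->base, &region->size);
    }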
-
Mike Rapoport authored
Iteration over memblock.reserved with for_each_reserved_mem_region() used __next_reserved_mem_region() that implemented a subset of __next_mem_region(). Use __for_each_mem_range() and, essentially, __next_mem_region() with appropriate parameters to reduce code duplication. While at it, rename for_each_reserved_mem_region() to for_each_reserved_mem_range() for consistency.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-17-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Mike Rapoport authored
Currently for_each_mem_range() and for_each_mem_range_rev() iterators are the most generic way to traverse memblock regions. As such, they have 8 parameters and they are hardly convenient to users. Most users choose to utilize one of their wrappers, and the only user that actually needs most of the parameters is memblock itself.

To avoid yet another naming for memblock iterators, rename the existing for_each_mem_range[_rev]() to __for_each_mem_range[_rev]() and add new for_each_mem_range[_rev]() wrappers with only index, start and end parameters. The new wrapper nicely fits into init_unavailable_mem() and will be used in upcoming changes to simplify memblock traversals.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> [MIPS]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-11-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
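A hedged sketch of the simplified wrapper in use; only the iteration index and the start/end outputs remain visible to the caller:

    /* Sketch: iterate all usable memory ranges known to memblock. */
    phys_addr_t start, end;
    u64 i;

    for_each_mem_range(i, &start, &end)
            pr_info("memory range: %pa - %pa\n", &start, &end);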
-
- Sep 09, 2020
-
Jason Gunthorpe authored
This helper does the same as rdma_for_each_block(), except it works on a umem. This simplifies most of the call sites.

Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
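A hedged usage sketch; "umem" and "pgsz" are assumed to come from the caller, and the block iterator type is struct ib_block_iter:

    /* Sketch: walk the umem in aligned DMA blocks of size pgsz. */
    struct ib_block_iter biter;
    dma_addr_t addr;

    rdma_umem_for_each_dma_block(umem, &biter, pgsz) {
            addr = rdma_block_iter_dma_address(&biter);
            /* program addr into the device's page table */
    }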
-
- Sep 01, 2020
-
Miguel Ojeda authored
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
- May 25, 2020
-
Omar Sandoval authored
An upcoming Btrfs fix needs to know the original size of a non-cloned bio. Rather than accessing the bvec table directly, let's add a bio_for_each_bvec_all() accessor.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
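A hedged sketch of the accessor; it walks the bio's own bvec table, so it is only valid for non-cloned bios:

    /* Sketch: sum the original size of a non-cloned bio. */
    struct bio_vec *bvec;
    unsigned int size = 0;
    int i;

    bio_for_each_bvec_all(bvec, bio, i)
            size += bvec->bv_len;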
-
- Apr 18, 2020
-
Miguel Ojeda authored
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
Ian Rogers authored
This change doesn't affect existing code. Inner namespace indentation can lead to a lot of indentation in the case of anonymous namespaces and the like, impeding readability. Of the clang-format builtin styles, LLVM, Google, Chromium and Mozilla use None while WebKit uses Inner.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
- Mar 06, 2020
-
Miguel Ojeda authored
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
- Aug 31, 2019
-
Miguel Ojeda authored
Re-run the shell fragment that generated the original list.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
- Apr 12, 2019
-
Miguel Ojeda authored
Re-run the shell fragment that generated the original list now that there are two dozen new entries after v5.1's merge window.

Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
- Mar 21, 2019
-
NeilBrown authored
The pattern set by list.h is that for_each..continue() iterators start at the next entry after the given one, while for_each..from() iterators start at the given entry. The rht_for_each*continue() iterators are documented as though they start at the 'next' entry, but actually start at the given entry, and they are used expecting that behaviour. So fix the documentation and change the names to *from for consistency with list.h.

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Feb 19, 2019
-
Jason Gunthorpe authored
We have many loops iterating over all of the end port numbers on a struct ib_device; simplify them with a for_each helper.

Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
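A hedged usage sketch; port numbering on a struct ib_device starts at rdma_start_port(), which the helper hides from the caller, and handle_ib_port() is illustrative:

    /* Sketch: visit every end port number of the device. */
    unsigned int port;

    rdma_for_each_port(device, port) {
            if (rdma_protocol_ib(device, port))
                    handle_ib_port(device, port);   /* illustrative callback */
    }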
-
- Feb 11, 2019
-
Jason Gunthorpe authored
Commit 2db76d7c ("lib/scatterlist: sg_page_iter: support sg lists w/o backing pages") introduced the sg_page_iter_dma_address() function without providing a way to use it in the general case. If the sg_dma_len() is not equal to the sg length, callers cannot safely use the for_each_sg_page/sg_page_iter_dma_address combination.

Resolve this API mistake by providing a DMA-specific iterator, for_each_sg_dma_page(), that uses the right length so sg_page_iter_dma_address() works as expected with all sglists. A new iterator type is introduced to provide compile-time safety against wrongly mixing accessors and iterators.

Acked-by: Christoph Hellwig <hch@lst.de> (for scatterlist)
Acked-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> (ipu3-cio2)
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
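A hedged usage sketch; "sgl" and "nents_mapped" are assumed to come from a prior dma_map_sg() call:

    /* Sketch: walk DMA addresses page by page, using the DMA length of
     * each element rather than its CPU length. */
    struct sg_dma_page_iter dma_iter;
    dma_addr_t addr;

    for_each_sg_dma_page(sgl, &dma_iter, nents_mapped, 0) {
            addr = sg_page_iter_dma_address(&dma_iter);
            /* hand addr to the device, one PAGE_SIZE chunk at a time */
    }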
-