- Dec 05, 2024
-
-
Andy Shevchenko authored
[ Upstream commit be4ca6c53e66cb275cf0d71f32dac0c4606b9dc0 ]

The init_size description of the boot protocol has an example of the runtime start address for the compressed bzImage. For a non-relocatable kernel it relies on the pref_address value (if not 0), but for the relocatable case it only pays respect to the load address and kernel_alignment, and it is inaccurate for the latter. The boot loader must consider pref_address, as the Linux kernel relocates to it before being decompressed, as nicely described in this commit message a year ago:

  43b1d3e6 ("kexec: Allocate kernel above bzImage's pref_address")

Due to this documentation inaccuracy some bootloaders (*) made a mistake in the calculations, and if the kernel image is big enough, this may lead to unbootable configurations.

*) In particular, kexec-tools missed that and recently got a couple of changes which will be part of the v2.0.30 release.

For the record, commit 43b1d3e6 only fixed the kernel kexec implementation and also missed to update the init_size description.

While at it, make the example look C-like, as is done elsewhere in the document, and fix the indentation as prescribed by the reStructuredText specification, so that syntax highlighting works properly.

Fixes: 43b1d3e6 ("kexec: Allocate kernel above bzImage's pref_address")
Fixes: d297366b ("x86: document new bzImage fields")
Signed-off-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Acked-by:
Randy Dunlap <rdunlap@infradead.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20241125105005.1616154-1-andriy.shevchenko@linux.intel.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
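For context, here is a minimal C-like sketch of the corrected calculation the commit describes: the boot loader must fold pref_address into the runtime start address even for a relocatable kernel. This paraphrases the fix rather than quoting boot.rst; align_up() is a hypothetical helper.

  /* Sketch only: choosing the runtime start address for the decompressed
   * kernel, taking pref_address into account in the relocatable case too. */
  static unsigned long align_up(unsigned long addr, unsigned long align)
  {
          return (addr + align - 1) & ~(align - 1);
  }

  static unsigned long runtime_start(unsigned long load_address,
                                     unsigned long pref_address,
                                     unsigned long kernel_alignment,
                                     int relocatable_kernel)
  {
          if (relocatable_kernel) {
                  /* The kernel relocates itself up to pref_address before
                   * decompression, so never plan for anything below it. */
                  if (load_address < pref_address)
                          load_address = pref_address;
                  return align_up(load_address, kernel_alignment);
          }
          /* Non-relocatable: pref_address (if not 0) is where it runs. */
          return pref_address;
  }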
-
- Oct 04, 2024
-
-
Easwar Hariharan authored
Add the Microsoft Azure Cobalt 100 CPU to the list of CPUs suffering from erratum 3194386 added in commit 75b3c43e ("arm64: errata: Expand speculative SSBS workaround") CC: Mark Rutland <mark.rutland@arm.com> CC: James Morse <james.morse@arm.com> CC: Will Deacon <will@kernel.org> CC: stable@vger.kernel.org # 6.6+ Signed-off-by:
Easwar Hariharan <eahariha@linux.microsoft.com> Link: https://lore.kernel.org/r/20241003225239.321774-1-eahariha@linux.microsoft.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Oct 02, 2024
-
-
Al Viro authored
asm/unaligned.h is always an include of asm-generic/unaligned.h; might as well move that thing to linux/unaligned.h and include that - there's nothing arch-specific in that header.

auto-generated by the following:

  for i in `git grep -l -w asm/unaligned.h`; do
          sed -i -e "s/asm\/unaligned.h/linux\/unaligned.h/" $i
  done
  for i in `git grep -l -w asm-generic/unaligned.h`; do
          sed -i -e "s/asm-generic\/unaligned.h/linux\/unaligned.h/" $i
  done
  git mv include/asm-generic/unaligned.h include/linux/unaligned.h
  git mv tools/include/asm-generic/unaligned.h tools/include/linux/unaligned.h
  sed -i -e "/unaligned.h/d" include/asm-generic/Kbuild
  sed -i -e "s/__ASM_GENERIC/__LINUX/" include/linux/unaligned.h tools/include/linux/unaligned.h
-
- Oct 01, 2024
-
-
Mark Rutland authored
A number of Arm Ltd CPUs suffer from errata whereby an MSR to the SSBS special-purpose register does not affect subsequent speculative instructions, permitting speculative store bypassing for a window of time. We worked around this for a number of CPUs in commits:

  * 7187bb7d ("arm64: errata: Add workaround for Arm errata 3194386 and 3312417")
  * 75b3c43e ("arm64: errata: Expand speculative SSBS workaround")
  * 145502cac7ea70b5 ("arm64: errata: Expand speculative SSBS workaround (again)")

Since then, a (hopefully final) batch of updates has been published, with two more affected CPUs. For the affected CPUs the existing mitigation is sufficient, as described in their respective Software Developer Errata Notice (SDEN) documents:

  * Cortex-A715 (MP148) SDEN v15.0, erratum 3456084
    https://developer.arm.com/documentation/SDEN-2148827/1500/
  * Neoverse-N3 (MP195) SDEN v5.0, erratum 3456111
    https://developer.arm.com/documentation/SDEN-3050973/0500/

Enable the existing mitigation by adding the relevant MIDRs to erratum_spec_ssbs_list, and update silicon-errata.rst and the Kconfig text accordingly.

Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240930111705.3352047-3-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Sep 24, 2024
-
-
Huacai Chen authored
Introduce the advanced extended interrupt controllers (AVECINTC). This feature will allow each core to have 256 independent interrupt vectors and MSI interrupts can be independently routed to any vector on any CPU. The whole topology of irqchips in LoongArch machines looks like this if AVECINTC is supported:

  +-----+     +-----------------------+     +-------+
  | IPI | --> |        CPUINTC        | <-- | Timer |
  +-----+     +-----------------------+     +-------+
               ^          ^          ^
               |          |          |
        +---------+ +----------+ +---------+     +-------+
        | EIOINTC | | AVECINTC | | LIOINTC | <-- | UARTs |
        +---------+ +----------+ +---------+     +-------+
             ^            ^
             |            |
        +---------+  +---------+
        | PCH-PIC |  | PCH-MSI |
        +---------+  +---------+
          ^      ^           ^
          |      |           |
  +---------+ +---------+ +---------+
  | Devices | | PCH-LPC | | Devices |
  +---------+ +---------+ +---------+
                   ^
                   |
              +---------+
              | Devices |
              +---------+

Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn> Signed-off-by:
Tianyang Zhang <zhangtianyang@loongson.cn>
-
- Sep 23, 2024
-
-
Jason J. Herne authored
Advertise features of the driver for the benefit of automated tooling like Libvirt and mdevctl. Signed-off-by:
Jason J. Herne <jjherne@linux.ibm.com> Reviewed-by:
Anthony Krowiak <akrowiak@linux.ibm.com> Reviewed-by:
Boris Fiuczynski <fiuczy@linux.ibm.com> Link: https://lore.kernel.org/r/20240916120123.11484-1-jjherne@linux.ibm.com Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com>
-
- Sep 05, 2024
-
-
Amit Vadhavana authored
Correct spelling mistakes in the documentation to improve readability. Signed-off-by:
Amit Vadhavana <av2082000@gmail.com> Reviewed-by:
Bjorn Helgaas <bhelgaas@google.com> Signed-off-by:
Jonathan Corbet <corbet@lwn.net> Link: https://lore.kernel.org/r/20240817072724.6861-1-av2082000@gmail.com
-
- Sep 04, 2024
-
-
Joey Gouly authored
Expose a HWCAP and ID_AA64MMFR3_EL1_S1POE to userspace, so they can be used to check if the CPU supports the feature. Signed-off-by:
Joey Gouly <joey.gouly@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Reviewed-by:
Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20240822151113.1479789-12-joey.gouly@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mike Rapoport (Microsoft) authored
NUMA emulation can now be enabled on arm64 and riscv in addition to x86. Move the description of the numa=fake parameter from the x86 documentation to admin-guide/kernel-parameters.txt. Link: https://lkml.kernel.org/r/20240807064110.1003856-27-rppt@kernel.org Suggested-by:
Zi Yan <ziy@nvidia.com> Signed-off-by:
Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by:
Jonathan Cameron <Jonathan.Cameron@huawei.com> Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> [arm64 + CXL via QEMU] Acked-by:
Dan Williams <dan.j.williams@intel.com> Acked-by:
David Hildenbrand <david@redhat.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Andreas Larsson <andreas@gaisler.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: David S. Miller <davem@davemloft.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Rafael J. Wysocki <rafael@kernel.org> Cc: Rob Herring (Arm) <robh@kernel.org> Cc: Samuel Holland <samuel.holland@sifive.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Aug 30, 2024
-
-
D Scott Phillips authored
The ampere1a cpu is affected by erratum AC04_CPU_10 which is the same bug as AC03_CPU_38. Add ampere1a to the AC03_CPU_38 workaround midr list. Cc: <stable@vger.kernel.org> Signed-off-by:
D Scott Phillips <scott@os.amperecomputing.com> Acked-by:
Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20240827211701.2216719-1-scott@os.amperecomputing.com Signed-off-by:
Will Deacon <will@kernel.org>
-
- Aug 29, 2024
-
-
Charlie Jenkins authored
This mmap behavior caused unintended breakages so the behavior has been changed. Signed-off-by:
Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20240826-riscv_mmap-v1-1-cd8962afe47f@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
- Aug 20, 2024
-
-
Yicong Yang authored
HiSilicon HIP10/11 platforms use the same SMMU PMCG as HIP09 and thus suffer from the same erratum. List them in the PMCG platform information list without introducing a new SMMU PMCG Model. Update the silicon-errata.rst as well. Signed-off-by:
Yicong Yang <yangyicong@hisilicon.com> Link: https://lore.kernel.org/r/20240731092658.11012-1-yangyicong@huawei.com Signed-off-by:
Will Deacon <will@kernel.org>
-
- Aug 14, 2024
-
-
Evan Green authored
In preparation for misaligned vector performance hwprobe keys, rename the hwprobe key values associated with misaligned scalar accesses to include the term SCALAR. Leave the old defines in place to maintain source compatibility. This change is intended to be a functional no-op. Signed-off-by:
Evan Green <evan@rivosinc.com> Reviewed-by:
Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20240809214444.3257596-3-evan@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
Evan Green authored
RISCV_HWPROBE_KEY_CPUPERF_0 was mistakenly flagged as a bitmask in hwprobe_key_is_bitmask(), when in reality it was an enum value. This causes problems when used in conjunction with RISCV_HWPROBE_WHICH_CPUS, since SLOW, FAST, and EMULATED have values whose bits overlap with each other. If the caller asked for the set of CPUs that was SLOW or EMULATED, the returned set would also include CPUs that were FAST. Introduce a new hwprobe key, RISCV_HWPROBE_KEY_MISALIGNED_PERF, which returns the same values in response to a direct query (with no flags), but is properly handled as an enumerated value. As a result, SLOW, FAST, and EMULATED are all correctly treated as distinct values under the new key when queried with the WHICH_CPUS flag. Leave the old key in place to avoid disturbing applications which may have already come to rely on the key, with or without its broken behavior with respect to the WHICH_CPUS flag. Fixes: e178bf14 ("RISC-V: hwprobe: Introduce which-cpus flag") Signed-off-by:
Evan Green <evan@rivosinc.com> Reviewed-by:
Charlie Jenkins <charlie@rivosinc.com> Reviewed-by:
Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20240809214444.3257596-2-evan@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
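A hedged userspace sketch of querying the new key follows. The key name is taken from the commit text above; the <asm/hwprobe.h> header, __NR_riscv_hwprobe and the "cpusetsize == 0, cpus == NULL means all online CPUs" convention are assumptions about the RISC-V hwprobe uapi and may need adjusting for a given kernel/libc.

  /* Sketch: ask the kernel how misaligned accesses perform. */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <asm/hwprobe.h>        /* struct riscv_hwprobe, key definitions */

  int main(void)
  {
          struct riscv_hwprobe pair = {
                  .key = RISCV_HWPROBE_KEY_MISALIGNED_PERF,
          };

          /* cpusetsize == 0 and cpus == NULL: probe all online CPUs. */
          if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) != 0) {
                  perror("riscv_hwprobe");
                  return 1;
          }

          /* The value is an enum (SLOW/FAST/EMULATED/...), not a bitmask. */
          printf("misaligned access perf: %llu\n",
                 (unsigned long long)pair.value);
          return 0;
  }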
-
- Aug 01, 2024
-
-
Mark Rutland authored
A number of Arm Ltd CPUs suffer from errata whereby an MSR to the SSBS special-purpose register does not affect subsequent speculative instructions, permitting speculative store bypassing for a window of time. We worked around this for a number of CPUs in commits:

  * 7187bb7d ("arm64: errata: Add workaround for Arm errata 3194386 and 3312417")
  * 75b3c43e ("arm64: errata: Expand speculative SSBS workaround")

Since then, similar errata have been published for a number of other Arm Ltd CPUs, for which the same mitigation is sufficient. This is described in their respective Software Developer Errata Notice (SDEN) documents:

  * Cortex-A76 (MP052) SDEN v31.0, erratum 3324349
    https://developer.arm.com/documentation/SDEN-885749/3100/
  * Cortex-A77 (MP074) SDEN v19.0, erratum 3324348
    https://developer.arm.com/documentation/SDEN-1152370/1900/
  * Cortex-A78 (MP102) SDEN v21.0, erratum 3324344
    https://developer.arm.com/documentation/SDEN-1401784/2100/
  * Cortex-A78C (MP138) SDEN v16.0, erratum 3324346
    https://developer.arm.com/documentation/SDEN-1707916/1600/
  * Cortex-A78C (MP154) SDEN v10.0, erratum 3324347
    https://developer.arm.com/documentation/SDEN-2004089/1000/
  * Cortex-A725 (MP190) SDEN v5.0, erratum 3456106
    https://developer.arm.com/documentation/SDEN-2832921/0500/
  * Cortex-X1 (MP077) SDEN v21.0, erratum 3324344
    https://developer.arm.com/documentation/SDEN-1401782/2100/
  * Cortex-X1C (MP136) SDEN v16.0, erratum 3324346
    https://developer.arm.com/documentation/SDEN-1707914/1600/
  * Neoverse-N1 (MP050) SDEN v32.0, erratum 3324349
    https://developer.arm.com/documentation/SDEN-885747/3200/
  * Neoverse-V1 (MP076) SDEN v19.0, erratum 3324341
    https://developer.arm.com/documentation/SDEN-1401781/1900/

Note that due to the manner in which Arm develops IP and tracks errata, some CPUs share a common erratum number and some CPUs have multiple erratum numbers for the same HW issue.

On parts without SB, it is necessary to use ISB for the workaround. The spec_bar() macro used in the mitigation will expand to a "DSB SY; ISB" sequence in this case, which is sufficient on all affected parts.

Enable the existing mitigation by adding the relevant MIDRs to erratum_spec_ssbs_list. The list is sorted alphanumerically (involving moving Neoverse-V3 after Neoverse-V2) so that this is easy to audit and potentially extend again in future. The Kconfig text is also updated to clarify the set of affected parts and the mitigation.

Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by:
Anshuman Khandual <anshuman.khandual@arm.com> Acked-by:
Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240801101803.1982459-4-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Jul 26, 2024
-
-
Palmer Dabbelt authored
The RISC-V architecture makes a real time counter CSR (via RDTIME instruction) available for applications in U-mode but there is no architected mechanism for an application to discover the frequency the counter is running at. Some applications (e.g., DPDK) use the time counter for basic performance analysis as well as fine grained time-keeping. Add support to the hwprobe system call to export the time CSR frequency to code running in U-mode. Signed-off-by:
Yunhui Cui <cuiyunhui@bytedance.com> Reviewed-by:
Evan Green <evan@rivosinc.com> Reviewed-by:
Anup Patel <anup@brainfault.org> Acked-by:
Punit Agrawal <punit.agrawal@bytedance.com> Link: https://lore.kernel.org/r/20240702033731.71955-2-cuiyunhui@bytedance.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
Stuart Menefy authored
This harmonizes all virtual addressing modes, which can now all map (PGDIR_SIZE * PTRS_PER_PGD) / 4 of physical memory. The RISC-V implementation of KASAN requires that the boundary between shallow mappings is aligned on an 8G boundary. In this case we need VMALLOC_START to be 8G aligned. So although we only need to move the start of the linear mapping down by 4GiB to allow 128GiB to be mapped, we actually move it down by 8GiB (creating a 4GiB hole between the linear mapping and KASAN shadow space) to maintain the alignment requirement. Signed-off-by:
Stuart Menefy <stuart.menefy@codasip.com> Reviewed-by:
Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20240630110550.1731929-1-stuart.menefy@codasip.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
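A quick worked check of the 128GiB figure, assuming the usual Sv39 constants (PGDIR_SHIFT = 30, PTRS_PER_PGD = 512), which are not spelled out in the commit itself:

  /* (PGDIR_SIZE * PTRS_PER_PGD) / 4 for Sv39 = (1 GiB * 512) / 4 = 128 GiB. */
  #include <stdio.h>

  int main(void)
  {
          unsigned long long pgdir_size = 1ULL << 30;   /* Sv39: 1 GiB per PGD entry */
          unsigned long long ptrs_per_pgd = 512;
          unsigned long long linear_map = pgdir_size * ptrs_per_pgd / 4;

          printf("Sv39 linear mapping: %llu GiB\n", linear_map >> 30); /* 128 */
          return 0;
  }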
-
- Jul 12, 2024
-
-
Andrew Jones authored
Export Zawrs ISA extension through hwprobe. [Palmer: there's a gap in the numbers here as there will be a merge conflict when this is picked up. To avoid confusion I just set the hwprobe ID to match what it would be post-merge.] Signed-off-by:
Andrew Jones <ajones@ventanamicro.com> Reviewed-by:
Clément Léger <cleger@rivosinc.com> Link: https://lore.kernel.org/r/20240426100820.14762-12-ajones@ventanamicro.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
Christophe Leroy authored
Commit 732b32da ("powerpc: Remove core support for 40x") removed 40x. Update documentation accordingly. Signed-off-by:
Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/c2d64bebc514ca892a12e51a68821a6317048d3a.1720694954.git.christophe.leroy@csgroup.eu
-
- Jul 11, 2024
-
-
Clément Léger authored
Some userspace applications (OpenJDK for instance) use the free MSBs in pointers to insert additional information for their own logic and need to get this information from somewhere. Currently they rely on parsing the /proc/cpuinfo "mmu=svxx" string to obtain the current value of virtual address usable bits [1]. Since this reflects the raw supported MMU mode, it might differ from the logical one used internally, which is why arch_get_mmap_end() is used. Exporting the highest mmapable address through hwprobe will allow a more stable interface to be used. For that purpose, add a new hwprobe key named RISCV_HWPROBE_KEY_HIGHEST_VIRT_ADDRESS which will export the highest userspace virtual address. Link: https://github.com/openjdk/jdk/blob/master/src/hotspot/os_cpu/linux_riscv/vm_version_linux_riscv.cpp#L171 [1] Signed-off-by:
Clément Léger <cleger@rivosinc.com> Reviewed-by:
Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20240410144558.1104006-1-cleger@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
- Jul 04, 2024
-
-
Kevin Brodsky authored
Most of memory.rst was written very early, at a time where TBI (Top Byte Ignore) was not enabled. Nowadays TBI0 is always enabled, and TBI1 may be enabled, depending on the kernel configuration. This means that VA bits 63:56 cannot generally be assumed to have any particular value. Regardless of TBI, TTBRx selection is done based on bit 55; update memory.rst accordingly. Signed-off-by:
Kevin Brodsky <kevin.brodsky@arm.com> Acked-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20240702091349.356008-1-kevin.brodsky@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
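As a small illustration of the point about bit 55 (a sketch, not kernel code): whatever the top byte holds under TBI, the translation regime is selected by bit 55 alone.

  #include <stdbool.h>
  #include <stdint.h>

  /* Bit 55 set => upper (TTBR1) VA range; clear => lower (TTBR0) range. */
  static inline bool va_selects_ttbr1(uint64_t va)
  {
          return (va >> 55) & 1;
  }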
-
- Jul 03, 2024
-
-
Li Zhijian authored
When it was in text format, it correctly hardcoded steps 8a to 8c. However, after it was converted to RST, the sequence numbers were auto-generated during rendering and became incorrect after some steps were inserted. Change it to refer to steps a to c in a relative way. Signed-off-by:
Li Zhijian <lizhijian@fujitsu.com> [jc: Indented the line to make the relative reference more clear] Signed-off-by:
Jonathan Corbet <corbet@lwn.net> Link: https://lore.kernel.org/r/20240614010028.48262-1-lizhijian@fujitsu.com
-
- Jul 02, 2024
-
-
Tony Luck authored
With Sub-NUMA Cluster (SNC) mode enabled, the scope of monitoring resources is per-NODE instead of per-L3 cache. Backwards compatibility is maintained by providing files in the mon_L3_XX directories that sum event counts for all SNC nodes sharing an L3 cache. New files provide per-SNC node event counts. Users should be aware that SNC mode also affects the amount of L3 cache available for allocation within each SNC node. Signed-off-by:
Tony Luck <tony.luck@intel.com> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by:
Reinette Chatre <reinette.chatre@intel.com> Tested-by:
Babu Moger <babu.moger@amd.com> Link: https://lore.kernel.org/r/20240628215619.76401-20-tony.luck@intel.com
-
- Jul 01, 2024
-
-
Charlie Jenkins authored
ON/OFF in the keys was swapped between the first and second argument of the prctl. The prctl key is always PR_RISCV_SET_ICACHE_FLUSH_CTX, and the second argument can be PR_RISCV_CTX_SW_FENCEI_ON or PR_RISCV_CTX_SW_FENCEI_OFF. Signed-off-by:
Charlie Jenkins <charlie@rivosinc.com> Fixes: 6a08e470 ("documentation: Document PR_RISCV_SET_ICACHE_FLUSH_CTX prctl") Link: https://lore.kernel.org/r/20240628-fix_cmodx_example-v1-1-e6c6523bc163@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
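For reference, a hedged sketch of the corrected usage: the first prctl argument is always the PR_RISCV_SET_ICACHE_FLUSH_CTX key and the second selects ON/OFF. The scope constant PR_RISCV_SCOPE_PER_PROCESS is assumed from the prctl documentation and should be checked against <linux/prctl.h>.

  /* Sketch: enable icache-flush context tracking for this process before
   * modifying code, as the (now fixed) documentation example shows. */
  #include <stdio.h>
  #include <sys/prctl.h>          /* pulls in the PR_RISCV_* constants */

  int main(void)
  {
          if (prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX,
                    PR_RISCV_CTX_SW_FENCEI_ON,
                    PR_RISCV_SCOPE_PER_PROCESS) != 0) {
                  perror("PR_RISCV_SET_ICACHE_FLUSH_CTX");
                  return 1;
          }
          /* ... write/modify instructions, then issue fence.i ... */
          return 0;
  }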
-
- Jun 28, 2024
-
-
James Morse authored
Add a description of physical and virtual CPU hotplug, explain the differences and elaborate on what is required in ACPI for a working virtual hotplug system. Signed-off-by:
James Morse <james.morse@arm.com> Reviewed-by:
Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by:
Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Tested-by:
Miguel Luis <miguel.luis@oracle.com> Reviewed-by:
Gavin Shan <gshan@redhat.com> Signed-off-by:
Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20240529133446.28446-19-Jonathan.Cameron@huawei.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Jun 26, 2024
-
-
Clément Léger authored
Export Zcmop ISA extension through hwprobe. Signed-off-by:
Clément Léger <cleger@rivosinc.com> Reviewed-by:
Evan Green <evan@rivosinc.com> Link: https://lore.kernel.org/r/20240619113529.676940-15-cleger@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
Clément Léger authored
Export Zca, Zcf, Zcd and Zcb ISA extension through hwprobe. Signed-off-by:
Clément Léger <cleger@rivosinc.com> Reviewed-by:
Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20240619113529.676940-10-cleger@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
Clément Léger authored
Export Zimop ISA extension through hwprobe. Signed-off-by:
Clément Léger <cleger@rivosinc.com> Reviewed-by:
Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20240619113529.676940-4-cleger@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
- Jun 17, 2024
-
-
Tony Luck authored
New CPU #defines encode vendor and family as well as model so "_FAM6" is no longer used in the #define names. Signed-off-by:
Tony Luck <tony.luck@intel.com> Signed-off-by:
Jonathan Corbet <corbet@lwn.net> Link: https://lore.kernel.org/r/20240611204814.353821-1-tony.luck@intel.com
-
- Jun 12, 2024
-
-
Mark Rutland authored
A number of Arm Ltd CPUs suffer from errata whereby an MSR to the SSBS special-purpose register does not affect subsequent speculative instructions, permitting speculative store bypassing for a window of time. We worked around this for Cortex-X4 and Neoverse-V3, in commit:

  7187bb7d ("arm64: errata: Add workaround for Arm errata 3194386 and 3312417")

... as per their Software Developer Errata Notice (SDEN) documents:

  * Cortex-X4 SDEN v8.0, erratum 3194386:
    https://developer.arm.com/documentation/SDEN-2432808/0800/
  * Neoverse-V3 SDEN v6.0, erratum 3312417:
    https://developer.arm.com/documentation/SDEN-2891958/0600/

Since then, similar errata have been published for a number of other Arm Ltd CPUs, for which the mitigation is the same. This is described in their respective SDEN documents:

  * Cortex-A710 SDEN v19.0, erratum 3324338
    https://developer.arm.com/documentation/SDEN-1775101/1900/?lang=en
  * Cortex-A720 SDEN v11.0, erratum 3456091
    https://developer.arm.com/documentation/SDEN-2439421/1100/?lang=en
  * Cortex-X2 SDEN v19.0, erratum 3324338
    https://developer.arm.com/documentation/SDEN-1775100/1900/?lang=en
  * Cortex-X3 SDEN v14.0, erratum 3324335
    https://developer.arm.com/documentation/SDEN-2055130/1400/?lang=en
  * Cortex-X925 SDEN v8.0, erratum 3324334
    https://developer.arm.com/documentation/109108/800/?lang=en
  * Neoverse-N2 SDEN v17.0, erratum 3324339
    https://developer.arm.com/documentation/SDEN-1982442/1700/?lang=en
  * Neoverse-V2 SDEN v9.0, erratum 3324336
    https://developer.arm.com/documentation/SDEN-2332927/900/?lang=en

Note that due to shared design lineage, some CPUs share the same erratum number.

Add these to the existing mitigation under CONFIG_ARM64_ERRATUM_3194386. As listing all of the erratum IDs in the runtime description would be unwieldy, this is reduced to:

  "SSBS not fully self-synchronizing"

... matching the description of the errata in all of the SDENs.

Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240603111812.1514101-6-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Cortex-X4 erratum 3194386 and Neoverse-V3 erratum 3312417 are identical, with duplicate Kconfig text and some unsightly ifdeffery. While we try to share code behind CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS, having separate options results in a fair amount of boilerplate code, and this will only get worse as we expand the set of affected CPUs. To reduce this boilerplate, unify the two behind a common Kconfig option. This removes the duplicate text and Kconfig logic, and removes the need for the intermediate ARM64_WORKAROUND_SPECULATIVE_SSBS option. The set of affected CPUs is described as a list so that this can easily be extended. I've used ARM64_ERRATUM_3194386 (matching the Neoverse-V3 erratum ID) as the common option, matching the way we use ARM64_ERRATUM_1319367 to cover Cortex-A57 erratum 1319537 and Cortex-A72 erratum 1319367. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240603111812.1514101-5-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Jun 11, 2024
-
-
Tom Lendacky authored
During early boot phases, check for the presence of an SVSM when running as an SEV-SNP guest. An SVSM is present if not running at VMPL0 and the 64-bit value at offset 0x148 into the secrets page is non-zero. If an SVSM is present, save the SVSM Calling Area address (CAA), located at offset 0x150 into the secrets page, and set the VMPL level of the guest, which should be non-zero, to indicate the presence of an SVSM. [ bp: Touchups. ] Signed-off-by:
Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/9d3fe161be93d4ea60f43c2a3f2c311fe708b63b.1717600736.git.thomas.lendacky@amd.com
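A hypothetical sketch of the check described above, based only on the offsets given in the commit message (0x148 for the 64-bit presence marker, 0x150 for the CAA); the field interpretation and names here are assumptions, not the kernel's struct layout.

  #include <stdbool.h>
  #include <stdint.h>
  #include <string.h>

  struct svsm_info {
          bool present;
          uint64_t caa;           /* SVSM Calling Area address */
  };

  /* vmpl: the VMPL the guest runs at; only a non-zero VMPL can be under an SVSM. */
  static struct svsm_info svsm_probe(const uint8_t *secrets_page, unsigned int vmpl)
  {
          struct svsm_info info = { 0 };
          uint64_t marker, caa;

          if (vmpl == 0)
                  return info;

          memcpy(&marker, secrets_page + 0x148, sizeof(marker));
          memcpy(&caa, secrets_page + 0x150, sizeof(caa));

          if (marker) {           /* non-zero 64-bit value at 0x148 => SVSM present */
                  info.present = true;
                  info.caa = caa;
          }
          return info;
  }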
-
- Jun 06, 2024
-
-
Gautam Menghani authored
Add support for using DPDES in the library for using guest state buffers. DPDES support is needed for enabling usage of doorbells in a L2 KVM on PAPR guest. Fixes: 6ccbbc33 ("KVM: PPC: Add helper library for Guest State Buffers") Cc: stable@vger.kernel.org # v6.7+ Signed-off-by:
Gautam Menghani <gautam@linux.ibm.com> Reviewed-by:
Nicholas Piggin <npiggin@gmail.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240605113913.83715-2-gautam@linux.ibm.com
-
- May 30, 2024
-
-
Andy Chiu authored
The following Vector subextensions for "embedded" platforms are added into RISCV_HWPROBE_KEY_IMA_EXT_0:

  - ZVE32X
  - ZVE32F
  - ZVE64X
  - ZVE64F
  - ZVE64D

Extensions ending with an X indicate that the platform doesn't have a vector FPU. Extensions ending with F/D indicate that single (F) or double (D) precision vector operations are supported. The number 32 or 64 following ZVE gives the maximum element length. Signed-off-by:
Andy Chiu <andy.chiu@sifive.com> Reviewed-by:
Clément Léger <cleger@rivosinc.com> Link: https://lore.kernel.org/r/20240510-zve-detection-v5-6-0711bdd26c12@sifive.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
Palmer Dabbelt authored
We're stuck supporting scalar misaligned loads in userspace because they were part of the ISA at the time we froze the uABI. That wasn't the case for vector misaligned accesses, so depending on them unconditionally is a userspace bug. All extant vector hardware traps on these misaligned accesses. Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20240524185600.5919-1-palmer@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
- May 10, 2024
-
-
Mark Rutland authored
Cortex-X4 and Neoverse-V3 suffer from errata whereby an MSR to the SSBS special-purpose register does not affect subsequent speculative instructions, permitting speculative store bypassing for a window of time. This is described in their Software Developer Errata Notice (SDEN) documents:

  * Cortex-X4 SDEN v8.0, erratum 3194386:
    https://developer.arm.com/documentation/SDEN-2432808/0800/
  * Neoverse-V3 SDEN v6.0, erratum 3312417:
    https://developer.arm.com/documentation/SDEN-2891958/0600/

To work around these errata, it is necessary to place a speculation barrier (SB) after MSR to the SSBS special-purpose register. This patch adds the requisite SB after writes to SSBS within the kernel, and hides the presence of SSBS from EL0 such that userspace software which cares about SSBS will manipulate this via prctl(PR_GET_SPECULATION_CTRL, ...). Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240508081400.235362-5-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
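Since SSBS is hidden from EL0 on affected parts, userspace that cares about store-bypass behaviour goes through the speculation-control prctl instead; a minimal sketch using the generic PR_SPEC_* interface (see Documentation/userspace-api/spec_ctrl.rst):

  /* Sketch: query, and if possible force off, speculative store bypass
   * for the calling task. */
  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
          int state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0);

          printf("store-bypass control state: %#x\n", state);

          /* PR_SPEC_DISABLE: stores may not bypass (mitigation on). */
          if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                    PR_SPEC_DISABLE, 0, 0) != 0)
                  perror("PR_SET_SPECULATION_CTRL");
          return 0;
  }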
-
- May 06, 2024
-
-
Benjamin Gray authored
Documents how to use the PR_PPC_GET_DEXCR and PR_PPC_SET_DEXCR prctl()'s for changing a process's DEXCR or its process tree default value. Signed-off-by:
Benjamin Gray <bgray@linux.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240417112325.728010-10-bgray@linux.ibm.com
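A hedged sketch of what such a call sequence might look like, assuming the PR_PPC_GET_DEXCR/PR_PPC_SET_DEXCR keys, the PR_PPC_DEXCR_NPHIE aspect and the PR_PPC_DEXCR_CTRL_* flags described by this series; verify the exact constants against <linux/prctl.h> before relying on them.

  /* Sketch: inspect one DEXCR aspect and request that it be set for this
   * process if the kernel reports it as editable. */
  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
          int ctrl = prctl(PR_PPC_GET_DEXCR, PR_PPC_DEXCR_NPHIE, 0, 0, 0);

          if (ctrl < 0) {
                  perror("PR_PPC_GET_DEXCR");
                  return 1;
          }
          if (ctrl & PR_PPC_DEXCR_CTRL_EDITABLE) {
                  /* Enable the NPHIE aspect (hashst/hashchk checking). */
                  if (prctl(PR_PPC_SET_DEXCR, PR_PPC_DEXCR_NPHIE,
                            PR_PPC_DEXCR_CTRL_SET, 0, 0) != 0)
                          perror("PR_PPC_SET_DEXCR");
          }
          return 0;
  }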
-
- May 02, 2024
-
-
Remington Brasga authored
Fix spelling and grammar in Docs descriptions Signed-off-by:
Remington Brasga <rbrasga@uci.edu> Reviewed-by:
Randy Dunlap <rdunlap@infradead.org> Signed-off-by:
Jonathan Corbet <corbet@lwn.net> Link: https://lore.kernel.org/r/20240429225527.2329-1-rbrasga@uci.edu
-
- Apr 29, 2024
-
-
Sourabh Jain authored
The patch titled ("powerpc: make fadump resilient with memory add/remove events") made significant changes to the implementation of fadump, particularly to elfcorehdr creation and the fadump crash info header structure. Therefore, update the fadump implementation documentation to reflect those changes.

The following updates are made to the firmware-assisted dump documentation:

1. The elfcorehdr is no longer stored after the fadump HDR in the reserved dump area. Instead, the second kernel dynamically allocates memory for the elfcorehdr within the address range from 0 to the boot memory size. Therefore, update figures 1 and 2 of Memory Reservation during the first and second kernels to reflect this change.

2. A version field has been added to the fadump header to manage future changes to the fadump crash info header structure without changing the fadump header magic number in the future. Therefore, remove the corresponding TODO from the document.

Signed-off-by:
Sourabh Jain <sourabhjain@linux.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240422195932.1583833-4-sourabhjain@linux.ibm.com
-
- Apr 28, 2024
-
-
Clément Léger authored
Export the Zihintpause ISA extension through hwprobe, which allows using "pause" instructions. Some userspace applications (OpenJDK for instance) use this to handle some locking back-off. Signed-off-by:
Clément Léger <cleger@rivosinc.com> Reviewed-by:
Atish Patra <atishp@rivosinc.com> Link: https://lore.kernel.org/r/20240221083108.1235311-1-cleger@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-