  1. Sep 28, 2022
    • [coredump] don't use __kernel_write() on kmap_local_page() · 06bbaa6d
      Al Viro authored
      
      passing kmap_local_page() result to __kernel_write() is unsafe -
      random ->write_iter() might (and 9p one does) get unhappy when
      passed ITER_KVEC with pointer that came from kmap_local_page().
      
      Fix by providing a variant of __kernel_write() that takes an iov_iter
      from caller (__kernel_write() becomes a trivial wrapper) and adding
      dump_emit_page() that parallels dump_emit(), except that instead of
      __kernel_write() it uses __kernel_write_iter() with ITER_BVEC source.
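
      For illustration, the new path boils down to something like the
      following sketch (abridged; the real dump_emit_page() also folds in
      the to_skip/limit bookkeeping that dump_emit() does, and the exact
      signatures are the ones of this kernel era):

          static int dump_emit_page(struct coredump_params *cprm, struct page *page)
          {
                  /* describe the page itself instead of a kmap_local_page() pointer */
                  struct bio_vec bvec = {
                          .bv_page   = page,
                          .bv_offset = 0,
                          .bv_len    = PAGE_SIZE,
                  };
                  struct iov_iter iter;
                  loff_t pos = cprm->file->f_pos;
                  ssize_t n;

                  iov_iter_bvec(&iter, WRITE, &bvec, 1, PAGE_SIZE);
                  n = __kernel_write_iter(cprm->file, &iter, &pos);
                  if (n != PAGE_SIZE)
                          return 0;
                  cprm->file->f_pos = pos;
                  cprm->written += PAGE_SIZE;
                  cprm->pos += PAGE_SIZE;
                  return 1;
          }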
      
      Fixes: 3159ed57 ("fs/coredump: use kmap_local_page()")
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      06bbaa6d
  2. Sep 07, 2022
    • freezer,sched: Rewrite core freezer logic · f5d39b02
      Peter Zijlstra authored
      
      Rewrite the core freezer to behave better wrt thawing and be simpler
      in general.
      
      By replacing PF_FROZEN with TASK_FROZEN, a special block state, it is
      ensured frozen tasks stay frozen until thawed and don't randomly wake
      up early, as is currently possible.
      
      As such, it does away with PF_FROZEN and PF_FREEZER_SKIP, freeing up
      two PF_flags (yay!).
      
      Specifically; the current scheme works a little like:
      
      	freezer_do_not_count();
      	schedule();
      	freezer_count();
      
      And either the task is blocked, or it lands in try_to_freeze()
      through freezer_count(). Now, when it is blocked, the freezer
      considers it frozen and continues.
      
      However, on thawing, once pm_freezing is cleared, freezer_count()
      stops working, and any random/spurious wakeup will let a task run
      before its time.
      
      That is, thawing tries to thaw things in an explicit order: kernel
      threads and workqueues first, then bring SMP back, then userspace,
      etc. However, due to the above mentioned races it is entirely
      possible for userspace tasks to thaw (by accident) before SMP is
      back.
      
      This can be a fatal problem in asymmetric ISA architectures (e.g.
      ARMv9) where the userspace task requires a special CPU to run.
      
      As said; replace this with a special task state TASK_FROZEN and add
      the following state transitions:
      
      	TASK_FREEZABLE	-> TASK_FROZEN
      	__TASK_STOPPED	-> TASK_FROZEN
      	__TASK_TRACED	-> TASK_FROZEN
      
      The new TASK_FREEZABLE can be set on any state part of TASK_NORMAL
      (IOW. TASK_INTERRUPTIBLE and TASK_UNINTERRUPTIBLE) -- any such state
      is already required to deal with spurious wakeups and the freezer
      causes one such when thawing the task (since the original state is
      lost).
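
      For illustration, a freezable wait under the new scheme simply ORs
      TASK_FREEZABLE into the blocked state (a sketch of the pattern, not
      a hunk from this patch; "condition" stands in for the caller's wait
      condition):

          set_current_state(TASK_INTERRUPTIBLE | TASK_FREEZABLE);
          if (!condition)
                  schedule();
          __set_current_state(TASK_RUNNING);

      While blocked like this the freezer can move the task from
      TASK_FREEZABLE to TASK_FROZEN; thawing it later shows up as one more
      spurious wakeup, which the surrounding wait loop must already handle.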
      
      The special __TASK_{STOPPED,TRACED} states *can* be restored since
      their canonical state is in ->jobctl.
      
      With this, frozen tasks need an explicit TASK_FROZEN wakeup and are
      free of undue (early / spurious) wakeups.
      
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Link: https://lore.kernel.org/r/20220822114649.055452969@infradead.org
      f5d39b02
    • sched: Add TASK_ANY for wait_task_inactive() · f9fc8cad
      Peter Zijlstra authored
      
      Now that wait_task_inactive()'s @match_state argument is a mask (like
      ttwu()) it is possible to replace the special !match_state case with
      an 'all-states' value such that any blocked state will match.
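
      A hypothetical caller then reads like this (illustration only):

          /* wait until @p is off the CPU, whatever blocked state it is in */
          if (!wait_task_inactive(p, TASK_ANY))
                  return -EAGAIN; /* @p was still runnable or changed state */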
      
      Suggested-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/YxhkzfuFTvRnpUaH@hirez.programming.kicks-ass.net
      f9fc8cad
  7. Jul 20, 2022
    • signal: Drop signals received after a fatal signal has been processed · 9a95f78e
      Eric W. Biederman authored
      In 403bad72 ("coredump: only SIGKILL should interrupt the
      coredumping task") Oleg modified the kernel to drop all signals that
      come in during a coredump except SIGKILL, and suggested that it might
      be a good idea to generalize that to other cases after the process has
      received a fatal signal.
      
      Semantically it does not make sense to perform any signal delivery
      after the process has already been killed.
      
      Today, when a signal is sent while a process is dying, the signal is
      placed in the signal queue by __send_signal and a single task of the
      process is woken up with signal_wake_up, if there are any tasks that
      have not set PF_EXITING.
      
      Take things one step farther and have prepare_signal report that all
      signals that come after a process has been killed should be ignored,
      while retaining the historical exception of allowing SIGKILL to
      interrupt coredumps.
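
      Conceptually the prepare_signal() change amounts to something like
      the following (a condensed paraphrase, not the literal hunk):

          if (signal->flags & SIGNAL_GROUP_EXIT) {
                  /* the process is already being killed */
                  if (signal->core_state && sig == SIGKILL)
                          return true;    /* keep the coredump exception */
                  return false;           /* drop everything else */
          }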
      
      Update the comment in fs/coredump.c to make it clear coredumps are
      special in being able to receive SIGKILL.
      
      This changes things so that a process stopped in PTRACE_EVENT_EXIT
      can not be made to escape its ptracer and finish exiting by sending
      it SIGKILL.  That a process can be made to leave PTRACE_EVENT_EXIT
      and escape its tracer by sending the process a SIGKILL has been
      complicating tracers for no apparent advantage.  If the process
      needs to be made to leave PTRACE_EVENT_EXIT, all that needs to
      happen is to kill the process's tracer.  This differs from the
      coredump code, where there is no other mechanism besides honoring
      SIGKILL to expedite the end of coredumping.
      
      Link: https://lkml.kernel.org/r/875yksd4s9.fsf_-_@email.froward.int.ebiederm.org
      
      
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      9a95f78e
  4. Jan 08, 2022
    • signal: Remove the helper signal_group_exit · 49697335
      Eric W. Biederman authored
      This helper is misleading.  It tests for an ongoing exec as well as
      the process having received a fatal signal.
      
      Sometimes it is appropriate to treat an on-going exec differently
      than a process that is shutting down due to a fatal signal.  In
      particular, taking the fast path out of exit_signals instead of
      retargeting signals is not appropriate during exec, and neither is
      leaving the exit code unchanged in do_group_exit during exec.
      
      Removing the helper makes it more obvious what is going on as both
      cases must be coded for explicitly.
      
      While removing the helper, fix the two cases where I have observed
      that using signal_group_exit produced the wrong result.
      
      In exit_signals only test for SIGNAL_GROUP_EXIT so that signals are
      retargeted during an exec.
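
      For reference, the helper being removed was essentially (paraphrased;
      the exec-tracking field is renamed over the course of this series):

          static inline bool signal_group_exit(const struct signal_struct *sig)
          {
                  return (sig->flags & SIGNAL_GROUP_EXIT) ||
                         (sig->group_exit_task != NULL); /* also true during exec */
          }

      After this change exit_signals tests only the first half of that
      condition.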
      
      In do_group_exit use 0 as the exit code during an exec as de_thread
      does not set group_exit_code.  As best as I can determine,
      group_exit_code is set to 0 most of the time during de_thread.
      During a thread group stop group_exit_code is set to the stop
      signal, and when the thread group receives SIGCONT group_exit_code
      is reset to 0.
      
      Link: https://lkml.kernel.org/r/20211213225350.27481-8-ebiederm@xmission.com
      
      
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      49697335
    • coredump: Stop setting signal->group_exit_task · 6ac79ec5
      Eric W. Biederman authored
      Currently the coredump code sets group_exit_task so that
      signal_group_exit() will return true during a coredump.  Now that the
      coredump code always sets SIGNAL_GROUP_EXIT there is no longer a need
      to set signal->group_exit_task.
      
      Link: https://lkml.kernel.org/r/20211213225350.27481-6-ebiederm@xmission.com
      
      
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      6ac79ec5
    • signal: Remove SIGNAL_GROUP_COREDUMP · 2f824d4d
      Eric W. Biederman authored
      After the previous cleanups "signal->core_state" is set whenever
      SIGNAL_GROUP_COREDUMP is set and "signal->core_state" is tested
      whenever the code wants to know if a coredump is in progress.  The
      remaining tests of SIGNAL_GROUP_COREDUMP also test to see if
      SIGNAL_GROUP_EXIT is set.  Similarly the only place that sets
      SIGNAL_GROUP_COREDUMP also sets SIGNAL_GROUP_EXIT.
      
      Which makes SIGNAL_GROUP_COREDUMP unnecessary and redundant.  So
      stop setting SIGNAL_GROUP_COREDUMP, stop testing
      SIGNAL_GROUP_COREDUMP, and remove its definition.
      
      With the setting of SIGNAL_GROUP_COREDUMP gone, coredump_finish no
      longer needs to clear SIGNAL_GROUP_COREDUMP out of signal->flags
      by setting SIGNAL_GROUP_EXIT.
      
      Link: https://lkml.kernel.org/r/20211213225350.27481-5-ebiederm@xmission.com
      
      
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      2f824d4d
    • signal: During coredumps set SIGNAL_GROUP_EXIT in zap_process · 752dc970
      Eric W. Biederman authored
      There are only a few places that test SIGNAL_GROUP_EXIT and
      are not also already testing SIGNAL_GROUP_COREDUMP.
      
      This will not affect the callers of signal_group_exit as zap_process
      also sets group_exit_task so signal_group_exit will continue to return
      true at the same times.
      
      This does not affect wait_task_zombie as none of the threads wind up
      in EXIT_ZOMBIE state during a coredump.
      
      This does not affect oom_kill.c:__task_will_free_mem as
      sig->core_state is tested and handled before SIGNAL_GROUP_EXIT is
      tested for.
      
      This does not affect complete_signal as signal->core_state is tested
      for to ensure the coredump case is handled appropriately.
      
      Link: https://lkml.kernel.org/r/20211213225350.27481-4-ebiederm@xmission.com
      
      
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      752dc970
  5. Oct 08, 2021
    • coredump: Limit coredumps to a single thread group · 0258b5fd
      Eric W. Biederman authored
      
      Today, when a signal is delivered with a handler of SIG_DFL whose
      default behavior is to generate a core dump, not only that process
      but every process that shares the mm is killed.
      
      In the case of vfork this looks like a real world problem.  Consider
      the following well defined sequence.
      
      	if (vfork() == 0) {
      		execve(...);
      		_exit(EXIT_FAILURE);
      	}
      
      If a signal that generates a core dump is received after vfork but
      before the execve changes the mm, the process that called vfork will
      also be killed (as the mm is shared).
      
      Similarly, if the execve fails after the point of no return, the
      kernel delivers SIGSEGV, which will kill both the exec'ing process
      and, because the mm is shared, the process that called vfork as
      well.
      
      As far as I can tell this behavior violates people's reasonable
      expectations and POSIX, and is unnecessarily fragile when the system
      is low on memory.
      
      Solve this by making a userspace-visible change to only kill a
      single process/thread group.  This is possible because Jann Horn
      recently modified[1] the coredump code so that the mm can safely be
      modified while the coredump is happening.  With LinuxThreads long
      gone I don't expect anyone to notice this behavior change in
      practice.
      
      To accomplish this move the core_state pointer from mm_struct to
      signal_struct, which allows different thread groups to coredump
      simultaneously.
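
      Schematically (other fields elided), the move described above is:

          struct mm_struct {
                  /* struct core_state *core_state;  removed: was shared by
                   * every process using this mm */
          };

          struct signal_struct {
                  struct core_state *core_state;  /* added: one dump per thread group */
          };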
      
      In zap_threads remove the work to kill anything except for the current
      thread group.
      
      v2: Remove core_state from the VM_BUG_ON_MM print to fix
          compile failure when CONFIG_DEBUG_VM is enabled.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      
      [1] a07279c9 ("binfmt_elf, binfmt_elf_fdpic: use a VMA list snapshot")
      Fixes: d89f3847def4 ("[PATCH] thread-aware coredumps, 2.5.43-C3")
      History-tree: git://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
      Link: https://lkml.kernel.org/r/87y27mvnke.fsf@disp2133
      Link: https://lkml.kernel.org/r/20211007144701.67592574@canb.auug.org.au
      
      
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      0258b5fd
  6. Oct 06, 2021
    • coredump: Don't perform any cleanups before dumping core · 92307383
      Eric W. Biederman authored
      Rename coredump_exit_mm to coredump_task_exit and call it from do_exit
      before PTRACE_EVENT_EXIT, and before any cleanup work for a task
      happens.  This ensures that an accurate copy of the process can be
      captured in the coredump as no cleanup for the process happens before
      the coredump completes.  This also ensures that PTRACE_EVENT_EXIT
      will not be visited by any thread until the coredump is complete.
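
      In do_exit() terms the new ordering is roughly (heavily condensed,
      not the literal code):

          void do_exit(long code)
          {
                  struct task_struct *tsk = current;

                  coredump_task_exit(tsk);                /* dump, or wait for the dump, first */
                  ptrace_event(PTRACE_EVENT_EXIT, code);  /* only reached once the dump is done */
                  /* ... */
                  exit_mm();                              /* cleanup starts after both of the above */
                  /* ... */
          }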
      
      Add a new flag PF_POSTCOREDUMP so that tasks that have passed through
      coredump_task_exit can be recognized and ignored in zap_process.
      
      Now that all of the coredumping happens before exit_mm remove code to
      test for a coredump in progress from mm_release.
      
      Replace "may_ptrace_stop()" with a simple test of "current->ptrace".
      The other tests in may_ptrace_stop all concern avoiding stopping
      during a coredump.  These tests are no longer necessary as it is now
      guaranteed that fatal_signal_pending will be set if the code enters
      ptrace_stop during a coredump.  The code in ptrace_stop is guaranteed
      not to stop if fatal_signal_pending returns true.
      
      Until this change "ptrace_event(PTRACE_EVENT_EXIT)" could call
      ptrace_stop without fatal_signal_pending being true, as signals are
      dequeued in get_signal before calling do_exit.  This is no longer
      an issue as "ptrace_event(PTRACE_EVENT_EXIT)" is no longer reached
      until after the coredump completes.
      
      Link: https://lkml.kernel.org/r/874kaax26c.fsf@disp2133
      
      
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      92307383
    • exit: Factor coredump_exit_mm out of exit_mm · d67e03e3
      Eric W. Biederman authored
      Separate the coredump logic from the ordinary exit_mm logic by
      moving the coredump logic out of exit_mm into its own function,
      coredump_exit_mm.
      
      Link: https://lkml.kernel.org/r/87a6k2x277.fsf@disp2133
      
      
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      d67e03e3
  7. Jun 10, 2021
    • coredump: Limit what can interrupt coredumps · 06af8679
      Eric W. Biederman authored
      
      Olivier Langlois has been struggling with coredumps being incompletely written in
      processes using io_uring.
      
      Olivier Langlois <olivier@trillion01.com> writes:
      > io_uring is a big user of task_work and any event that io_uring made a
      > task waiting for that occurs during the core dump generation will
      > generate a TIF_NOTIFY_SIGNAL.
      >
      > Here are the detailed steps of the problem:
      > 1. io_uring calls vfs_poll() to install a task to a file wait queue
      >    with io_async_wake() as the wakeup function cb from io_arm_poll_handler()
      > 2. wakeup function ends up calling task_work_add() with TWA_SIGNAL
      > 3. task_work_add() sets the TIF_NOTIFY_SIGNAL bit by calling
      >    set_notify_signal()
      
      The coredump code deliberately supports being interrupted by SIGKILL,
      and depends upon prepare_signal to filter out all other signals.  Now
      that signal_pending includes wakeups for TIF_NOTIFY_SIGNAL, this hack
      in the coredump code's dump_emit() no longer works.
      
      Make the coredump code more robust by explicitly testing for all of
      the wakeup conditions the coredump code supports.  This prevents
      new wakeup conditions from breaking the coredump code, as well
      as fixing the current issue.
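
      The resulting check looks roughly like this (sketch; the point is
      that the dump only reacts to the wakeups it actually handles):

          static bool dump_interrupted(void)
          {
                  /* a fatal signal (SIGKILL) or the freezer, not TIF_NOTIFY_SIGNAL */
                  return fatal_signal_pending(current) || freezing(current);
          }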
      
      The filesystem code that the coredump code uses already limits
      itself to only aborting on fatal_signal_pending.  So it should
      not develop surprising wake-up reasons either.
      
      v2: Don't remove the now unnecessary code in prepare_signal.
      
      Cc: stable@vger.kernel.org
      Fixes: 12db8b69 ("entry: Add support for TIF_NOTIFY_SIGNAL")
      Reported-by: Olivier Langlois <olivier@trillion01.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      06af8679
  8. Mar 08, 2021
    • coredump: don't bother with do_truncate() · d0f1088b
      Al Viro authored
      
      have dump_skip() just remember how much needs to be skipped,
      leave actual seeks/writing zeroes to the next dump_emit()
      or the end of coredump output, whichever comes first.
      And instead of playing with do_truncate() in the end, just
      write one NUL at the end of the last gap (if any).
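
      A sketch of the deferred-skip idea (names approximate):

          void dump_skip(struct coredump_params *cprm, size_t nr)
          {
                  cprm->to_skip += nr;    /* just remember it; no seek, no zeroes yet */
          }

          /* and before actually writing anything (or at the very end): */
          if (cprm->to_skip) {
                  if (!__dump_skip(cprm, cprm->to_skip))  /* seek, or write zeroes on a pipe */
                          return 0;
                  cprm->to_skip = 0;
          }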
      
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      d0f1088b
  9. Apr 28, 2020
    • coredump: fix crash when umh is disabled · 3740d93e
      Luis Chamberlain authored
      Commit 64e90a8a ("Introduce STATIC_USERMODEHELPER to mediate
      call_usermodehelper()") added the option to disable all
      call_usermodehelper() calls by setting STATIC_USERMODEHELPER_PATH to
      an empty string.  When this is done and a coredump is triggered, it
      will crash on a NULL pointer dereference, since we make assumptions
      about what call_usermodehelper_exec() did.
      
      This has been reported by Sergey when one triggers a coredump
      with the following configuration:
      
      ```
      CONFIG_STATIC_USERMODEHELPER=y
      CONFIG_STATIC_USERMODEHELPER_PATH=""
      kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
      ```
      
      The way disabling the umh was designed was that
      call_usermodehelper_exec() would just return early, without an
      error.  But coredump assumes certain variables are set up for us
      when this happens, and calls file_start_write(cprm.file) with a NULL
      file.
      
      [    2.819676] BUG: kernel NULL pointer dereference, address: 0000000000000020
      [    2.819859] #PF: supervisor read access in kernel mode
      [    2.820035] #PF: error_code(0x0000) - not-present page
      [    2.820188] PGD 0 P4D 0
      [    2.820305] Oops: 0000 [#1] SMP PTI
      [    2.820436] CPU: 2 PID: 89 Comm: a Not tainted 5.7.0-rc1+ #7
      [    2.820680] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190711_202441-buildvm-armv7-10.arm.fedoraproject.org-2.fc31 04/01/2014
      [    2.821150] RIP: 0010:do_coredump+0xd80/0x1060
      [    2.821385] Code: e8 95 11 ed ff 48 c7 c6 cc a7 b4 81 48 8d bd 28 ff ff ff 89 c2 e8 70 f1 ff ff 41 89 c2 85 c0 0f 84 72 f7 ff ff e9 b4 fe ff ff <48> 8b 57 20 0f b7 02 66 25 00 f0 66 3d 00 80 0f 84 9c 01 00 00 44
      [    2.822014] RSP: 0000:ffffc9000029bcb8 EFLAGS: 00010246
      [    2.822339] RAX: 0000000000000000 RBX: ffff88803f860000 RCX: 000000000000000a
      [    2.822746] RDX: 0000000000000009 RSI: 0000000000000282 RDI: 0000000000000000
      [    2.823141] RBP: ffffc9000029bde8 R08: 0000000000000000 R09: ffffc9000029bc00
      [    2.823508] R10: 0000000000000001 R11: ffff88803dec90be R12: ffffffff81c39da0
      [    2.823902] R13: ffff88803de84400 R14: 0000000000000000 R15: 0000000000000000
      [    2.824285] FS:  00007fee08183540(0000) GS:ffff88803e480000(0000) knlGS:0000000000000000
      [    2.824767] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [    2.825111] CR2: 0000000000000020 CR3: 000000003f856005 CR4: 0000000000060ea0
      [    2.825479] Call Trace:
      [    2.825790]  get_signal+0x11e/0x720
      [    2.826087]  do_signal+0x1d/0x670
      [    2.826361]  ? force_sig_info_to_task+0xc1/0xf0
      [    2.826691]  ? force_sig_fault+0x3c/0x40
      [    2.826996]  ? do_trap+0xc9/0x100
      [    2.827179]  exit_to_usermode_loop+0x49/0x90
      [    2.827359]  prepare_exit_to_usermode+0x77/0xb0
      [    2.827559]  ? invalid_op+0xa/0x30
      [    2.827747]  ret_from_intr+0x20/0x20
      [    2.827921] RIP: 0033:0x55e2c76d2129
      [    2.828107] Code: 2d ff ff ff e8 68 ff ff ff 5d c6 05 18 2f 00 00 01 c3 0f 1f 80 00 00 00 00 c3 0f 1f 80 00 00 00 00 e9 7b ff ff ff 55 48 89 e5 <0f> 0b b8 00 00 00 00 5d c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40
      [    2.828603] RSP: 002b:00007fffeba5e080 EFLAGS: 00010246
      [    2.828801] RAX: 000055e2c76d2125 RBX: 0000000000000000 RCX: 00007fee0817c718
      [    2.829034] RDX: 00007fffeba5e188 RSI: 00007fffeba5e178 RDI: 0000000000000001
      [    2.829257] RBP: 00007fffeba5e080 R08: 0000000000000000 R09: 00007fee08193c00
      [    2.829482] R10: 0000000000000009 R11: 0000000000000000 R12: 000055e2c76d2040
      [    2.829727] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
      [    2.829964] CR2: 0000000000000020
      [    2.830149] ---[ end trace ceed83d8c68a1bf1 ]---
      ```
      
      Cc: <stable@vger.kernel.org> # v4.11+
      Fixes: 64e90a8a ("Introduce STATIC_USERMODEHELPER to mediate call_usermodehelper()")
      BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=199795
      
      
      Reported-by: Tony Vroon <chainsaw@gentoo.org>
      Reported-by: Sergey Kvachonok <ravenexp@gmail.com>
      Tested-by: Sergei Trofimovich <slyfox@gentoo.org>
      Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
      Link: https://lore.kernel.org/r/20200416162859.26518-1-mcgrof@kernel.org
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      3740d93e