| author | Ingo Molnar <mingo@elte.hu> | 2002-11-05 04:25:29 -0800 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2002-11-05 04:25:29 -0800 |
| commit | d89f3847def4a55a84cc42809994bde2a148e9e0 (patch) | |
| tree | 44e264429c0e2f71167e77ee272e22c40d6acef8 /kernel/signal.c | |
| parent | 5a7728c6d3eb83df9d120944cca4cf476dd326a1 (diff) | |
| download | tip-d89f3847def4.tar.gz | |
[PATCH] thread-aware coredumps, 2.5.43-C3
Notice: this object is not reachable from any branch.
This is the second iteration of thread-aware coredumps.
Changes:
- Ulrich Drepper has reviewed the data structures and checked actual
coredumps via readelf - everything looks fine and matches the spec.
- a serious bug has been fixed in the thread-state dumping code - it was
still based on the 2.4 assumption that the task struct points to the
kernel stack; in 2.5 the stack is reached via task->thread_info. This
bug caused bogus register info to be filled in for threads. (A rough
sketch of the 2.5 layout follows this list.)
- properly wait for all threads that share the same MM to serialize with
the coredumping thread. This is CLONE_VM based, not tied to
CLONE_THREAD and/or signal semantics, i.e. old-style (or different-style)
threaded apps will be properly stopped as well.
The locking might look a bit complex, but I wanted to keep the
__exit_mm() overhead as low as possible. It's not quite trivial to get
these bits right, because 'sharing the MM' is detached from signal
semantics, so we cannot rely on a broadcast kill catching all threads.
So zap_threads() iterates through every thread and zaps those that were
left out; a rough sketch of such a loop appears after the diff below.
(There's a minimal race left where a newly forked child might escape the
attention of zap_threads() - this race is fixed by the OOM fixes in the
mmap-speedup patch.)
- fill_psinfo() is now called with the thread group leader, so the
coredump gets 'process' state.
- initialize the elf_thread_status structure with zeroes.
(Both of these changes are sketched right after the changelog below.)
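As a side note to the thread-state fix above: on 2.5 x86 the per-thread kernel stack is reached through the task's thread_info pointer, and the saved user-mode registers of a stopped thread sit at the top of that stack. The helper below is only a minimal sketch of that idiom, written for illustration; the name thread_user_regs is made up here and the exact offset of the pt_regs frame is architecture specific, so this is not the patch's code.

```c
#include <linux/sched.h>
#include <asm/thread_info.h>
#include <asm/ptrace.h>

/*
 * Illustrative sketch only (not from the patch): locate the saved
 * user-mode registers of a stopped task.  In 2.4 the task struct
 * itself sat at the base of the kernel stack; in 2.5 the stack is
 * reached via task->thread_info instead.
 */
static struct pt_regs *thread_user_regs(struct task_struct *p)
{
	/* top of the kernel stack = thread_info base + THREAD_SIZE */
	unsigned long stack_top = (unsigned long)p->thread_info + THREAD_SIZE;

	/*
	 * The pt_regs frame pushed at kernel entry sits just below the
	 * stack top (the exact offset is arch specific; assumed here).
	 */
	return (struct pt_regs *)stack_top - 1;
}
```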
The IA64 ELF bits are not included yet, to keep the complexity of the
patch down. The patch has been tested on x86 UP and SMP.
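For the last two changelog items, the shape of the change in the ELF coredump path is roughly the following. This is a hedged sketch, not the patch verbatim: the fill_psinfo() signature, the elf_thread_status fields and the thread_list variable are assumed from the 2.5-era fs/binfmt_elf.c context.

```c
/*
 * Sketch (assumed names and signatures, not quoted from the patch):
 * record one zero-initialized elf_thread_status per sibling thread,
 * and take NT_PRPSINFO from the thread group leader so it describes
 * the process rather than the dumping thread.
 */
struct elf_thread_status *tmp;

tmp = kmalloc(sizeof(*tmp), GFP_ATOMIC);
if (tmp) {
	memset(tmp, 0, sizeof(*tmp));	/* "initialize ... with zeroes" */
	tmp->thread = p;		/* p: the sibling thread being recorded */
	list_add(&tmp->list, &thread_list);
}

/* 'process' state comes from the group leader, not from current */
fill_psinfo(psinfo, current->group_leader, current->mm);
```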
Diffstat (limited to 'kernel/signal.c')
| -rw-r--r-- | kernel/signal.c | 18 |
1 file changed, 16 insertions, 2 deletions
```diff
diff --git a/kernel/signal.c b/kernel/signal.c
index b037b12ce04ba8..2f2a5c233f6123 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -768,7 +768,7 @@ force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
 }
 
 static int
-specific_force_sig_info(int sig, struct task_struct *t)
+__specific_force_sig_info(int sig, struct task_struct *t)
 {
 	if (!t->sig)
 		return -ESRCH;
@@ -781,6 +781,20 @@ specific_force_sig_info(int sig, struct task_struct *t)
 	return specific_send_sig_info(sig, (void *)2, t, 0);
 }
 
+void
+force_sig_specific(int sig, struct task_struct *t)
+{
+	unsigned long int flags;
+
+	spin_lock_irqsave(&t->sig->siglock, flags);
+	if (t->sig->action[sig-1].sa.sa_handler == SIG_IGN)
+		t->sig->action[sig-1].sa.sa_handler = SIG_DFL;
+	sigdelset(&t->blocked, sig);
+	recalc_sigpending_tsk(t);
+	specific_send_sig_info(sig, (void *)2, t, 0);
+	spin_unlock_irqrestore(&t->sig->siglock, flags);
+}
+
 #define can_take_signal(p, sig)	\
 	(((unsigned long) p->sig->action[sig-1].sa.sa_handler > 1) &&	\
 	 !sigismember(&p->blocked, sig) && (task_curr(p) || !signal_pending(p)))
@@ -846,7 +860,7 @@ int __broadcast_thread_group(struct task_struct *p, int sig)
 	int err = 0;
 
 	for_each_task_pid(p->tgid, PIDTYPE_TGID, tmp, l, pid)
-		err = specific_force_sig_info(sig, tmp);
+		err = __specific_force_sig_info(sig, tmp);
 	return err;
 }
 
```
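The new force_sig_specific() above is the primitive the CLONE_VM serialization can build on: it delivers a signal to one specific task even if that task has the signal blocked or ignored. A zap_threads()-style caller, as described in the changelog, might look roughly like the sketch below; the function body, the for_each_task() iterator and the locking shown here are assumptions about the 2.5-era API, not code quoted from the patch.

```c
#include <linux/sched.h>
#include <linux/mm.h>

/*
 * Sketch only: stop every task that shares the dumping task's MM
 * (CLONE_VM, independent of CLONE_THREAD or signal-group membership)
 * before writing the core file.  force_sig_specific() ensures the
 * SIGKILL cannot be blocked or ignored by the target.
 */
static void zap_threads(struct mm_struct *mm)
{
	struct task_struct *p;

	read_lock(&tasklist_lock);
	for_each_task(p)
		if (p->mm == mm && p != current)
			force_sig_specific(SIGKILL, p);
	read_unlock(&tasklist_lock);
}
```

The other half of the serialization, the coredumping thread waiting in __exit_mm() terms until every zapped sibling has actually stopped, would need an additional per-MM counter or completion and is beyond this sketch.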
