Archive for the ‘signal’ Category

kernel: signal: how a fatal signal kills a thread group

December 26, 2015

This post discusses how a fatal signal kills a thread group.

reference code base
linux 4.3

call stack

group_send_sig_info()
-> do_send_sig_info()
   -> send_signal()
      -> __send_signal()
         -> sigaddset()
do_notify_resume()
-> do_signal()
   -> get_signal()
      -> dequeue_signal()
         -> __dequeue_signal()
            -> collect_signal()
               -> sigdelset()
      -> do_group_exit()
         -> zap_other_threads()
   -> handle_signal()

how to send a signal to kill a thread group
For example, the oom-killer calls do_send_sig_info(SIGKILL, SEND_SIG_FORCED, victim, true) to kill a thread group. The signal argument is SIGKILL, which means the target thread should exit immediately. The siginfo argument is SEND_SIG_FORCED, a special siginfo value that tells __send_signal() to process the signal on the fast path. The task argument victim is the thread to kill. The group argument is true, which means SIGKILL is added to task->signal->shared_pending and can be dequeued by any thread in the thread group.

Whether the group argument is true or false, all threads in the thread group ultimately receive SIGKILL. If group is true, any thread in the same thread group as the victim thread may dequeue the SIGKILL and call do_group_exit(), which sends SIGKILL to all the other threads in the group. If group is false, only the victim thread can dequeue the SIGKILL.
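
As a concrete illustration (a userspace sketch, not kernel code), the two delivery targets map onto two system calls: kill(2) addresses the whole thread group, so the kernel posts the signal with group = true, while tgkill(2) addresses a single thread, so the signal is posted with group = false. SIGTERM is ignored here so that both sends can be issued without terminating the process:

/*
 * Userspace sketch of the two delivery targets.
 * kill(2) posts to the thread group's shared_pending (group = true);
 * tgkill(2) posts to one thread's private pending set (group = false).
 */
#include <signal.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
        pid_t tgid = getpid();                    /* thread group id */
        pid_t tid  = syscall(SYS_gettid);         /* this thread's id */

        signal(SIGTERM, SIG_IGN);                 /* keep the demo alive */

        /* group delivery: any thread in the group may dequeue it */
        kill(tgid, SIGTERM);

        /* per-thread delivery: only thread 'tid' can dequeue it */
        syscall(SYS_tgkill, tgid, tid, SIGTERM);

        return 0;
}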

480 /*
481  * Must be called while holding a reference to p, which will be released upon
482  * returning.
483  */
484 void oom_kill_process(struct oom_control *oc, struct task_struct *p,
485                       unsigned int points, unsigned long totalpages,
486                       struct mem_cgroup *memcg, const char *message)
487 {
......
588         do_send_sig_info(SIGKILL, SEND_SIG_FORCED, victim, true);
589         put_task_struct(victim);
590 }
591 #undef K
2522 /* These can be the second arg to send_sig_info/send_group_sig_info.  */
2523 #define SEND_SIG_NOINFO ((struct siginfo *) 0)
2524 #define SEND_SIG_PRIV   ((struct siginfo *) 1)
2525 #define SEND_SIG_FORCED ((struct siginfo *) 2)

how get_signal() calls do_group_exit() to exit the thread group
get_signal() calls dequeue_signal() to read pending signals from task->pending. If no deliverable signal is found there, it falls back to task->signal->shared_pending. If a fatal signal such as SIGKILL is dequeued, get_signal() calls do_group_exit(ksig->info.si_signo), which in turn calls zap_other_threads() to send SIGKILL to all the other threads in the thread group.

do_group_exit() sets sig->group_exit_code to the exit code and sets sig->flags to SIGNAL_GROUP_EXIT, then calls zap_other_threads(), which calls sigaddset(&t->pending.signal, SIGKILL) for every other task in the thread group, marking SIGKILL pending in each task's private t->pending, and wakes each task up with signal_wake_up().

2176 int get_signal(struct ksignal *ksig)
2177 {
2178         struct sighand_struct *sighand = current->sighand;
2179         struct signal_struct *signal = current->signal;
2180         int signr;
2181 
2182         if (unlikely(current->task_works))
2183                 task_work_run();
2184 
2185         if (unlikely(uprobe_deny_signal()))
2186                 return 0;
2187 
2188         /*
2189          * Do this once, we can't return to user-mode if freezing() == T.
2190          * do_signal_stop() and ptrace_stop() do freezable_schedule() and
2191          * thus do not need another check after return.
2192          */
2193         try_to_freeze();
2194 
2195 relock:
2196         spin_lock_irq(&sighand->siglock);
2197         /*
2198          * Every stopped thread goes here after wakeup. Check to see if
2199          * we should notify the parent, prepare_signal(SIGCONT) encodes
2200          * the CLD_ si_code into SIGNAL_CLD_MASK bits.
2201          */
2202         if (unlikely(signal->flags & SIGNAL_CLD_MASK)) {
2203                 int why;
2204 
2205                 if (signal->flags & SIGNAL_CLD_CONTINUED)
2206                         why = CLD_CONTINUED;
2207                 else
2208                         why = CLD_STOPPED;
2209 
2210                 signal->flags &= ~SIGNAL_CLD_MASK;
2211 
2212                 spin_unlock_irq(&sighand->siglock);
2213 
2214                 /*
2215                  * Notify the parent that we're continuing.  This event is
2216                  * always per-process and doesn't make whole lot of sense
2217                  * for ptracers, who shouldn't consume the state via
2218                  * wait(2) either, but, for backward compatibility, notify
2219                  * the ptracer of the group leader too unless it's gonna be
2220                  * a duplicate.
2221                  */
2222                 read_lock(&tasklist_lock);
2223                 do_notify_parent_cldstop(current, false, why);
2224 
2225                 if (ptrace_reparented(current->group_leader))
2226                         do_notify_parent_cldstop(current->group_leader,
2227                                                 true, why);
2228                 read_unlock(&tasklist_lock);
2229 
2230                 goto relock;
2231         }
2232 
2233         for (;;) {
2234                 struct k_sigaction *ka;
2235 
2236                 if (unlikely(current->jobctl & JOBCTL_STOP_PENDING) &&
2237                     do_signal_stop(0))
2238                         goto relock;
2239 
2240                 if (unlikely(current->jobctl & JOBCTL_TRAP_MASK)) {
2241                         do_jobctl_trap();
2242                         spin_unlock_irq(&sighand->siglock);
2243                         goto relock;
2244                 }
2245 
2246                 signr = dequeue_signal(current, &current->blocked, &ksig->info);
2247 
2248                 if (!signr)
2249                         break; /* will return 0 */
2250 
2251                 if (unlikely(current->ptrace) && signr != SIGKILL) {
2252                         signr = ptrace_signal(signr, &ksig->info);
2253                         if (!signr)
2254                                 continue;
2255                 }
2256 
2257                 ka = &sighand->action[signr-1];
2258 
2259                 /* Trace actually delivered signals. */
2260                 trace_signal_deliver(signr, &ksig->info, ka);
2261 
2262                 if (ka->sa.sa_handler == SIG_IGN) /* Do nothing.  */
2263                         continue;
2264                 if (ka->sa.sa_handler != SIG_DFL) {
2265                         /* Run the handler.  */
2266                         ksig->ka = *ka;
2267 
2268                         if (ka->sa.sa_flags & SA_ONESHOT)
2269                                 ka->sa.sa_handler = SIG_DFL;
2270 
2271                         break; /* will return non-zero "signr" value */
2272                 }
2273 
2274                 /*
2275                  * Now we are doing the default action for this signal.
2276                  */
2277                 if (sig_kernel_ignore(signr)) /* Default is nothing. */
2278                         continue;
2279 
2280                 /*
2281                  * Global init gets no signals it doesn't want.
2282                  * Container-init gets no signals it doesn't want from same
2283                  * container.
2284                  *
2285                  * Note that if global/container-init sees a sig_kernel_only()
2286                  * signal here, the signal must have been generated internally
2287                  * or must have come from an ancestor namespace. In either
2288                  * case, the signal cannot be dropped.
2289                  */
2290                 if (unlikely(signal->flags & SIGNAL_UNKILLABLE) &&
2291                                 !sig_kernel_only(signr))
2292                         continue;
2293 
2294                 if (sig_kernel_stop(signr)) {
2295                         /*
2296                          * The default action is to stop all threads in
2297                          * the thread group.  The job control signals
2298                          * do nothing in an orphaned pgrp, but SIGSTOP
2299                          * always works.  Note that siglock needs to be
2300                          * dropped during the call to is_orphaned_pgrp()
2301                          * because of lock ordering with tasklist_lock.
2302                          * This allows an intervening SIGCONT to be posted.
2303                          * We need to check for that and bail out if necessary.
2304                          */
2305                         if (signr != SIGSTOP) {
2306                                 spin_unlock_irq(&sighand->siglock);
2307 
2308                                 /* signals can be posted during this window */
2309 
2310                                 if (is_current_pgrp_orphaned())
2311                                         goto relock;
2312 
2313                                 spin_lock_irq(&sighand->siglock);
2314                         }
2315 
2316                         if (likely(do_signal_stop(ksig->info.si_signo))) {
2317                                 /* It released the siglock.  */
2318                                 goto relock;
2319                         }
2320 
2321                         /*
2322                          * We didn't actually stop, due to a race
2323                          * with SIGCONT or something like that.
2324                          */
2325                         continue;
2326                 }
2327 
2328                 spin_unlock_irq(&sighand->siglock);
2329 
2330                 /*
2331                  * Anything else is fatal, maybe with a core dump.
2332                  */
2333                 current->flags |= PF_SIGNALED;
2334 
2335                 if (sig_kernel_coredump(signr)) {
2336                         if (print_fatal_signals)
2337                                 print_fatal_signal(ksig->info.si_signo);
2338                         proc_coredump_connector(current);
2339                         /*
2340                          * If it was able to dump core, this kills all
2341                          * other threads in the group and synchronizes with
2342                          * their demise.  If we lost the race with another
2343                          * thread getting here, it set group_exit_code
2344                          * first and our do_group_exit call below will use
2345                          * that value and ignore the one we pass it.
2346                          */
2347                         do_coredump(&ksig->info);
2348                 }
2349 
2350                 /*
2351                  * Death signals, no core dump.
2352                  */
2353                 do_group_exit(ksig->info.si_signo);
2354                 /* NOTREACHED */
2355         }
2356         spin_unlock_irq(&sighand->siglock);
2357 
2358         ksig->sig = signr;
2359         return ksig->sig > 0;
2360 }
846 /*
847  * Take down every thread in the group.  This is called by fatal signals
848  * as well as by sys_exit_group (below).
849  */
850 void
851 do_group_exit(int exit_code)
852 {
853         struct signal_struct *sig = current->signal;
854 
855         BUG_ON(exit_code & 0x80); /* core dumps don't get here */
856 
857         if (signal_group_exit(sig))
858                 exit_code = sig->group_exit_code;
859         else if (!thread_group_empty(current)) {
860                 struct sighand_struct *const sighand = current->sighand;
861 
862                 spin_lock_irq(&sighand->siglock);
863                 if (signal_group_exit(sig))
864                         /* Another thread got here before we took the lock.  */
865                         exit_code = sig->group_exit_code;
866                 else {
867                         sig->group_exit_code = exit_code;
868                         sig->flags = SIGNAL_GROUP_EXIT;
869                         zap_other_threads(current);
870                 }
871                 spin_unlock_irq(&sighand->siglock);
872         }
873 
874         do_exit(exit_code);
875         /* NOTREACHED */
876 }
1231 /*
1232  * Nuke all other threads in the group.
1233  */
1234 int zap_other_threads(struct task_struct *p)
1235 {
1236         struct task_struct *t = p;
1237         int count = 0;
1238 
1239         p->signal->group_stop_count = 0;
1240 
1241         while_each_thread(p, t) {
1242                 task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
1243                 count++;
1244 
1245                 /* Don't bother with already dead threads */
1246                 if (t->exit_state)
1247                         continue;
1248                 sigaddset(&t->pending.signal, SIGKILL);
1249                 signal_wake_up(t, 1);
1250         }
1251 
1252         return count;
1253 }
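
The effect of zap_other_threads() can be observed from userspace with a small demo (an illustration, assuming Linux with the tgkill(2) syscall): even though SIGKILL below is delivered to a single worker thread, the whole thread group dies, so the final printf() in the main thread is never reached.

/*
 * Deliver SIGKILL to a single thread and observe that the entire
 * thread group dies via do_group_exit()/zap_other_threads().
 * Build with: cc demo.c -lpthread
 */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

static void *worker(void *arg)
{
        (void)arg;
        /* post SIGKILL to this thread's private pending set only */
        syscall(SYS_tgkill, getpid(), (pid_t)syscall(SYS_gettid), SIGKILL);
        return NULL;    /* never reached */
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);

        /* never reached: SIGKILL was propagated to this thread too */
        printf("unreachable\n");
        return 0;
}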

conclusion
This post discusses how a fatal signal kills a thread group. When a thread dequeues a fatal signal such as SIGKILL, it calls do_group_exit() to take down every thread in the same thread group. do_group_exit() accomplishes this by calling zap_other_threads() to send SIGKILL to all the other threads in the group.


kernel: signal: __send_signal and do_signal

December 26, 2015

This post discusses __send_signal() and do_signal().

reference code base
linux 4.3

call stack

group_send_sig_info()
-> do_send_sig_info()
   -> send_signal()
      -> __send_signal()
         -> sigaddset()
do_notify_resume()
-> do_signal()
   -> get_signal()
      -> dequeue_signal()
         -> __dequeue_signal()
            -> collect_signal()
               -> sigdelset()

how does __send_signal() set a signal pending
__send_signal() calls legacy_queue() to check whether the signal is already pending. If it is not, __send_signal() calls sigaddset() to mark the signal pending.

If the group argument is true, the pending set is &t->signal->shared_pending. Otherwise, it is &t->pending.

1018 static int __send_signal(int sig, struct siginfo *info, struct task_struct *t,
1019                         int group, int from_ancestor_ns)
1020 {
1021         struct sigpending *pending;
1022         struct sigqueue *q;
1023         int override_rlimit;
1024         int ret = 0, result;
1025 
1026         assert_spin_locked(&t->sighand->siglock);
1027 
1028         result = TRACE_SIGNAL_IGNORED;
1029         if (!prepare_signal(sig, t,
1030                         from_ancestor_ns || (info == SEND_SIG_FORCED)))
1031                 goto ret;
1032 
1033         pending = group ? &t->signal->shared_pending : &t->pending;
1034         /*
1035          * Short-circuit ignored signals and support queuing
1036          * exactly one non-rt signal, so that we can get more
1037          * detailed information about the cause of the signal.
1038          */
1039         result = TRACE_SIGNAL_ALREADY_PENDING;
1040         if (legacy_queue(pending, sig))
1041                 goto ret;
1042 
1043         result = TRACE_SIGNAL_DELIVERED;
1044         /*
1045          * fast-pathed signals for kernel-internal things like SIGSTOP
1046          * or SIGKILL.
1047          */
1048         if (info == SEND_SIG_FORCED)
1049                 goto out_set;
1050 
1051         /*
1052          * Real-time signals must be queued if sent by sigqueue, or
1053          * some other real-time mechanism.  It is implementation
1054          * defined whether kill() does so.  We attempt to do so, on
1055          * the principle of least surprise, but since kill is not
1056          * allowed to fail with EAGAIN when low on memory we just
1057          * make sure at least one signal gets delivered and don't
1058          * pass on the info struct.
1059          */
1060         if (sig < SIGRTMIN)
1061                 override_rlimit = (is_si_special(info) || info->si_code >= 0);
1062         else
1063                 override_rlimit = 0;
1064 
1065         q = __sigqueue_alloc(sig, t, GFP_ATOMIC | __GFP_NOTRACK_FALSE_POSITIVE,
1066                 override_rlimit);
1067         if (q) {
1068                 list_add_tail(&q->list, &pending->list);
1069                 switch ((unsigned long) info) {
1070                 case (unsigned long) SEND_SIG_NOINFO:
1071                         q->info.si_signo = sig;
1072                         q->info.si_errno = 0;
1073                         q->info.si_code = SI_USER;
1074                         q->info.si_pid = task_tgid_nr_ns(current,
1075                                                         task_active_pid_ns(t));
1076                         q->info.si_uid = from_kuid_munged(current_user_ns(), current_uid());
1077                         break;
1078                 case (unsigned long) SEND_SIG_PRIV:
1079                         q->info.si_signo = sig;
1080                         q->info.si_errno = 0;
1081                         q->info.si_code = SI_KERNEL;
1082                         q->info.si_pid = 0;
1083                         q->info.si_uid = 0;
1084                         break;
1085                 default:
1086                         copy_siginfo(&q->info, info);
1087                         if (from_ancestor_ns)
1088                                 q->info.si_pid = 0;
1089                         break;
1090                 }
1091 
1092                 userns_fixup_signal_uid(&q->info, t);
1093 
1094         } else if (!is_si_special(info)) {
1095                 if (sig >= SIGRTMIN && info->si_code != SI_USER) {
1096                         /*
1097                          * Queue overflow, abort.  We may abort if the
1098                          * signal was rt and sent by user using something
1099                          * other than kill().
1100                          */
1101                         result = TRACE_SIGNAL_OVERFLOW_FAIL;
1102                         ret = -EAGAIN;
1103                         goto ret;
1104                 } else {
1105                         /*
1106                          * This is a silent loss of information.  We still
1107                          * send the signal, but the *info bits are lost.
1108                          */
1109                         result = TRACE_SIGNAL_LOSE_INFO;
1110                 }
1111         }
1112 
1113 out_set:
1114         signalfd_notify(t, sig);
1115         sigaddset(&pending->signal, sig);
1116         complete_signal(sig, t, group);
1117 ret:
1118         trace_signal_generate(sig, info, t, group, result);
1119         return ret;
1120 }
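
The sigaddset() at out_set can be observed from userspace: if SIGUSR1 is blocked and then raised, sigpending(2) reports the bit that __send_signal() set. A minimal sketch:

/*
 * Block SIGUSR1, raise it, and observe the pending bit that
 * __send_signal() set via sigaddset().
 */
#include <signal.h>
#include <stdio.h>

int main(void)
{
        sigset_t block, pend;

        sigemptyset(&block);
        sigaddset(&block, SIGUSR1);
        sigprocmask(SIG_BLOCK, &block, NULL);   /* keep it undeliverable */

        raise(SIGUSR1);                         /* __send_signal() runs */

        sigpending(&pend);
        printf("SIGUSR1 pending: %d\n",
               sigismember(&pend, SIGUSR1));    /* prints 1 */
        return 0;
}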

how does do_signal() clear a signal pending
do_signal() is executed when a thread is returning from kernel space to user space and the thread info flag _TIF_SIGPENDING is set. It tries to find a pending signal to handle.

dequeue_signal() calls __dequeue_signal() to dequeue a signal from tsk->pending. If no pending signal is found there, dequeue_signal() calls __dequeue_signal() again to dequeue a signal from tsk->signal->shared_pending, which is shared by all threads in the thread group.

__dequeue_signal() calls next_signal() to find the first available signal and calls collect_signal() to collect it.

collect_signal() calls sigdelset() to clear the signal's pending bit.

599 /*
600  * Dequeue a signal and return the element to the caller, which is
601  * expected to free it.
602  *
603  * All callers have to hold the siglock.
604  */
605 int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info)
606 {
607         int signr;
608 
609         /* We only dequeue private signals from ourselves, we don't let
610          * signalfd steal them
611          */
612         signr = __dequeue_signal(&tsk->pending, mask, info);
613         if (!signr) {
614                 signr = __dequeue_signal(&tsk->signal->shared_pending,
615                                          mask, info);
616                 /*
617                  * itimer signal ?
618                  *
619                  * itimers are process shared and we restart periodic
620                  * itimers in the signal delivery path to prevent DoS
621                  * attacks in the high resolution timer case. This is
622                  * compliant with the old way of self-restarting
623                  * itimers, as the SIGALRM is a legacy signal and only
624                  * queued once. Changing the restart behaviour to
625                  * restart the timer in the signal dequeue path is
626                  * reducing the timer noise on heavy loaded !highres
627                  * systems too.
628                  */
629                 if (unlikely(signr == SIGALRM)) {
630                         struct hrtimer *tmr = &tsk->signal->real_timer;
631 
632                         if (!hrtimer_is_queued(tmr) &&
633                             tsk->signal->it_real_incr.tv64 != 0) {
634                                 hrtimer_forward(tmr, tmr->base->get_time(),
635                                                 tsk->signal->it_real_incr);
636                                 hrtimer_restart(tmr);
637                         }
638                 }
639         }
640 
641         recalc_sigpending();
642         if (!signr)
643                 return 0;
644 
645         if (unlikely(sig_kernel_stop(signr))) {
646                 /*
647                  * Set a marker that we have dequeued a stop signal.  Our
648                  * caller might release the siglock and then the pending
649                  * stop signal it is about to process is no longer in the
650                  * pending bitmasks, but must still be cleared by a SIGCONT
651                  * (and overruled by a SIGKILL).  So those cases clear this
652                  * shared flag after we've set it.  Note that this flag may
653                  * remain set after the signal we return is ignored or
654                  * handled.  That doesn't matter because its only purpose
655                  * is to alert stop-signal processing code when another
656                  * processor has come along and cleared the flag.
657                  */
658                 current->jobctl |= JOBCTL_STOP_DEQUEUED;
659         }
660         if ((info->si_code & __SI_MASK) == __SI_TIMER && info->si_sys_private) {
661                 /*
662                  * Release the siglock to ensure proper locking order
663                  * of timer locks outside of siglocks.  Note, we leave
664                  * irqs disabled here, since the posix-timers code is
665                  * about to disable them again anyway.
666                  */
667                 spin_unlock(&tsk->sighand->siglock);
668                 do_schedule_next_timer(info);
669                 spin_lock(&tsk->sighand->siglock);
670         }
671         return signr;
672 }
578 static int __dequeue_signal(struct sigpending *pending, sigset_t *mask,
579                         siginfo_t *info)
580 {
581         int sig = next_signal(pending, mask);
582 
583         if (sig) {
584                 if (current->notifier) {
585                         if (sigismember(current->notifier_mask, sig)) {
586                                 if (!(current->notifier)(current->notifier_data)) {
587                                         clear_thread_flag(TIF_SIGPENDING);
588                                         return 0;
589                                 }
590                         }
591                 }
592 
593                 collect_signal(sig, pending, info);
594         }
595 
596         return sig;
597 }
541 static void collect_signal(int sig, struct sigpending *list, siginfo_t *info)
542 {
543         struct sigqueue *q, *first = NULL;
544 
545         /*
546          * Collect the siginfo appropriate to this signal.  Check if
547          * there is another siginfo for the same signal.
548         */
549         list_for_each_entry(q, &list->list, list) {
550                 if (q->info.si_signo == sig) {
551                         if (first)
552                                 goto still_pending;
553                         first = q;
554                 }
555         }
556 
557         sigdelset(&list->signal, sig);
558 
559         if (first) {
560 still_pending:
561                 list_del_init(&first->list);
562                 copy_siginfo(info, &first->info);
563                 __sigqueue_free(first);
564         } else {
565                 /*
566                  * Ok, it wasn't in the queue.  This must be
567                  * a fast-pathed signal or we must have been
568                  * out of queue space.  So zero out the info.
569                  */
570                 info->si_signo = sig;
571                 info->si_errno = 0;
572                 info->si_code = SI_USER;
573                 info->si_pid = 0;
574                 info->si_uid = 0;
575         }
576 }
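
The clearing side can be observed the same way: dequeuing the blocked signal with sigwait(2), which reaches dequeue_signal() in the kernel, removes the pending bit that was set earlier. A minimal sketch:

/*
 * Dequeue a blocked pending signal with sigwait(2) and observe that
 * collect_signal()/sigdelset() cleared the pending bit.
 */
#include <signal.h>
#include <stdio.h>

int main(void)
{
        sigset_t set, pend;
        int sig;

        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        sigprocmask(SIG_BLOCK, &set, NULL);

        raise(SIGUSR1);                 /* marks SIGUSR1 pending */
        sigwait(&set, &sig);            /* dequeue path clears the bit */

        sigpending(&pend);
        printf("SIGUSR1 pending after sigwait: %d\n",
               sigismember(&pend, SIGUSR1));    /* prints 0 */
        return 0;
}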

conclusion
This post discusses __send_signal() and do_signal(). It shows how __send_signal() marks a signal pending and how do_signal() clears the pending bit when the signal is dequeued.

kernel: signal: how a signal is ignored if it’s already pending

December 26, 2015

This post discusses how a signal is ignored if it’s already pending.

reference code base
linux 4.3

call stack

group_send_sig_info()
-> do_send_sig_info()
   -> send_signal()
      -> __send_signal()

__send_signal
__send_signal() is the implementation of sending a signal to a thread or a thread group. It calls legacy_queue() to check whether the signal is a standard (non-real-time) signal that is already pending. If so, __send_signal() records the result TRACE_SIGNAL_ALREADY_PENDING and returns immediately without queuing the signal again.

struct sigpending has a member sigset_t signal that records all pending signals. If the group argument is true, __send_signal() checks the shared_pending set of the thread group; otherwise, it checks the task's private pending set.

legacy_queue() returns true when the signal is not a real-time signal and is already a member of the pending set. Thus only standard signals can be coalesced this way; a real-time signal is queued even if it is already pending.

1018 static int __send_signal(int sig, struct siginfo *info, struct task_struct *t,
1019                         int group, int from_ancestor_ns)
1020 {
1021         struct sigpending *pending;
1022         struct sigqueue *q;
1023         int override_rlimit;
1024         int ret = 0, result;
1025 
1026         assert_spin_locked(&t->sighand->siglock);
1027 
1028         result = TRACE_SIGNAL_IGNORED;
1029         if (!prepare_signal(sig, t,
1030                         from_ancestor_ns || (info == SEND_SIG_FORCED)))
1031                 goto ret;
1032 
1033         pending = group ? &t->signal->shared_pending : &t->pending;
1034         /*
1035          * Short-circuit ignored signals and support queuing
1036          * exactly one non-rt signal, so that we can get more
1037          * detailed information about the cause of the signal.
1038          */
1039         result = TRACE_SIGNAL_ALREADY_PENDING;
1040         if (legacy_queue(pending, sig))
1041                 goto ret;
1042 
1043         result = TRACE_SIGNAL_DELIVERED;
1044         /*
1045          * fast-pathed signals for kernel-internal things like SIGSTOP
1046          * or SIGKILL.
1047          */
1048         if (info == SEND_SIG_FORCED)
1049                 goto out_set;
1050 
1051         /*
1052          * Real-time signals must be queued if sent by sigqueue, or
1053          * some other real-time mechanism.  It is implementation
1054          * defined whether kill() does so.  We attempt to do so, on
1055          * the principle of least surprise, but since kill is not
1056          * allowed to fail with EAGAIN when low on memory we just
1057          * make sure at least one signal gets delivered and don't
1058          * pass on the info struct.
1059          */
1060         if (sig < SIGRTMIN)
1061                 override_rlimit = (is_si_special(info) || info->si_code >= 0);
1062         else
1063                 override_rlimit = 0;
1064 
1065         q = __sigqueue_alloc(sig, t, GFP_ATOMIC | __GFP_NOTRACK_FALSE_POSITIVE,
1066                 override_rlimit);
1067         if (q) {
1068                 list_add_tail(&q->list, &pending->list);
1069                 switch ((unsigned long) info) {
1070                 case (unsigned long) SEND_SIG_NOINFO:
1071                         q->info.si_signo = sig;
1072                         q->info.si_errno = 0;
1073                         q->info.si_code = SI_USER;
1074                         q->info.si_pid = task_tgid_nr_ns(current,
1075                                                         task_active_pid_ns(t));
1076                         q->info.si_uid = from_kuid_munged(current_user_ns(), current_uid());
1077                         break;
1078                 case (unsigned long) SEND_SIG_PRIV:
1079                         q->info.si_signo = sig;
1080                         q->info.si_errno = 0;
1081                         q->info.si_code = SI_KERNEL;
1082                         q->info.si_pid = 0;
1083                         q->info.si_uid = 0;
1084                         break;
1085                 default:
1086                         copy_siginfo(&q->info, info);
1087                         if (from_ancestor_ns)
1088                                 q->info.si_pid = 0;
1089                         break;
1090                 }
1091 
1092                 userns_fixup_signal_uid(&q->info, t);
1093 
1094         } else if (!is_si_special(info)) {
1095                 if (sig >= SIGRTMIN && info->si_code != SI_USER) {
1096                         /*
1097                          * Queue overflow, abort.  We may abort if the
1098                          * signal was rt and sent by user using something
1099                          * other than kill().
1100                          */
1101                         result = TRACE_SIGNAL_OVERFLOW_FAIL;
1102                         ret = -EAGAIN;
1103                         goto ret;
1104                 } else {
1105                         /*
1106                          * This is a silent loss of information.  We still
1107                          * send the signal, but the *info bits are lost.
1108                          */
1109                         result = TRACE_SIGNAL_LOSE_INFO;
1110                 }
1111         }
1112 
1113 out_set:
1114         signalfd_notify(t, sig);
1115         sigaddset(&pending->signal, sig);
1116         complete_signal(sig, t, group);
1117 ret:
1118         trace_signal_generate(sig, info, t, group, result);
1119         return ret;
1120 }
 26 struct sigpending {
 27         struct list_head list;
 28         sigset_t signal;
 29 };
992 static inline int legacy_queue(struct sigpending *signals, int sig)
993 {
994         return (sig < SIGRTMIN) && sigismember(&signals->signal, sig);
995 }
 58 static inline int sigismember(sigset_t *set, int _sig)
 59 {
 60         unsigned long sig = _sig - 1;
 61         if (_NSIG_WORDS == 1)
 62                 return 1 & (set->sig[0] >> sig);
 63         else
 64                 return 1 & (set->sig[sig / _NSIG_BPW] >> (sig % _NSIG_BPW));
 65 }
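
The coalescing behavior is easy to demonstrate from userspace: raising a blocked standard signal twice yields one delivery, while raising a blocked real-time signal twice yields two. A minimal sketch:

/*
 * legacy_queue() coalescing seen from userspace: a standard signal
 * raised twice while blocked is delivered once; a real-time signal
 * is queued each time.
 */
#include <signal.h>
#include <stdio.h>
#include <time.h>

static int drain(int sig)
{
        sigset_t set;
        siginfo_t info;
        struct timespec zero = { 0, 0 };
        int n = 0;

        sigemptyset(&set);
        sigaddset(&set, sig);
        sigprocmask(SIG_BLOCK, &set, NULL);

        raise(sig);     /* first send marks the signal pending */
        raise(sig);     /* second send: coalesced if sig < SIGRTMIN */

        while (sigtimedwait(&set, &info, &zero) == sig)
                n++;
        return n;
}

int main(void)
{
        printf("SIGUSR1 (standard) delivered %d time(s)\n", drain(SIGUSR1));
        printf("SIGRTMIN (real-time) delivered %d time(s)\n", drain(SIGRTMIN));
        return 0;
}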

conclusion
This post discusses how a signal is ignored if it's already pending. If a standard (non-real-time) signal is already pending, __send_signal() ignores the new instance and records the result TRACE_SIGNAL_ALREADY_PENDING.

