linux-nat.c: better starvation avoidance, handle non-stop mode too

Running the testsuite with a series that reimplements user-visible
all-stop behavior on top of a target running in non-stop mode revealed
problems related to event starvation avoidance.

For example, I see
gdb.threads/signal-while-stepping-over-bp-other-thread.exp failing.
What happens is that GDB core never gets to see the signal event.  It
ends up processing the events for the same threads over and over,
because Linux's waitpid(-1, ...) returns the first task in the task
list that has an event, starving the threads at the tail of the task list.

So I wrote a non-stop mode test originally inspired by
signal-while-stepping-over-bp-other-thread.exp, to stress this
independently of all-stop on top of non-stop.  Fixing it required the
changes described below.  The test will be added in a following
commit.

1) linux-nat.c has code in place that picks an event LWP at random out
of all those that have had events.  This is because on the kernel side,
"waitpid(-1, ...)" just walks the task list linearly looking for the
first task that has an event.  But this code is currently only used in
all-stop mode.  So with a multi-threaded program that has multiple
threads triggering debug events in parallel, GDB ends up starving some
threads in non-stop mode.

To make the event randomization work in non-stop mode too, the patch
makes us pull out all the already pending events on the kernel side,
with waitpid, before deciding which LWP to report to the core.

There's some code in linux_nat_wait_1 that takes care of leaving events
pending if they were for LWPs the caller is not interested in.  The
patch moves that to linux_nat_filter_event, so that we only have one
place that leaves events pending.  With that in place, the flow is
conceptually simpler and more uniform:

 #1 - walk the LWP list looking for an LWP with a pending event to report.
 #2 - if no pending event, pull events out of the kernel, and store
      them in the LWP structures as pending.
 #3 - goto #1.
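
To make that concrete, here is a rough, self-contained sketch of steps
#1-#3.  All names here (lwp_sketch, find_pending_lwp, wait_for_lwp_event,
etc.) are invented for the illustration; the real code lives in
linux_nat_wait_1 and linux_nat_filter_event, and also handles signal
masking, ptrace options, new clone children, and the selection policy
discussed in item 2) below.

#include <stddef.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Hypothetical stand-in for linux-nat.h's struct lwp_info.  */
struct lwp_sketch
{
  struct lwp_sketch *next;
  pid_t pid;
  int status_pending_p;		/* Non-zero if PENDING_STATUS is valid.  */
  int pending_status;		/* waitpid status left pending on this LWP.  */
};

/* Look up an LWP by pid in the list starting at LWP_LIST.  */

static struct lwp_sketch *
find_lwp (struct lwp_sketch *lwp_list, pid_t pid)
{
  struct lwp_sketch *lp;

  for (lp = lwp_list; lp != NULL; lp = lp->next)
    if (lp->pid == pid)
      return lp;
  return NULL;
}

/* Step #1: walk the LWP list looking for an already-pending event.  */

static struct lwp_sketch *
find_pending_lwp (struct lwp_sketch *lwp_list)
{
  struct lwp_sketch *lp;

  for (lp = lwp_list; lp != NULL; lp = lp->next)
    if (lp->status_pending_p)
      return lp;
  return NULL;
}

/* Steps #1-#3 combined: return an LWP with a pending event, pulling
   events out of the kernel whenever none is pending yet.  The real
   code also passes __WALL so clone children are waited for, and
   filters the raw statuses before leaving them pending.  */

static struct lwp_sketch *
wait_for_lwp_event (struct lwp_sketch *lwp_list)
{
  for (;;)
    {
      struct lwp_sketch *lp = find_pending_lwp (lwp_list);	/* #1 */
      int status;
      pid_t pid;

      if (lp != NULL)
	return lp;

      /* #2: block for the first kernel event, then sweep up everything
	 else already queued without blocking, storing each status as a
	 pending event on its LWP.  */
      pid = waitpid (-1, &status, 0);
      while (pid > 0)
	{
	  lp = find_lwp (lwp_list, pid);
	  if (lp != NULL)
	    {
	      lp->pending_status = status;
	      lp->status_pending_p = 1;
	    }
	  pid = waitpid (-1, &status, WNOHANG);
	}
      /* #3: goto #1 (loop around).  */
    }
}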

2) Currently, the event randomization code only considers SIGTRAP
(or trap-like) events.  That means that if, e.g., we have multiple
threads stepping in parallel that hit a breakpoint that needs stepping
over, and one of them gets a signal, the signal may never get
processed, because GDB always gives priority to the SIGTRAPs.
The patch fixes this by making the randomization code consider all
kinds of pending events.
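
As an illustration of the selection policy, here is a hedged sketch of
picking an event LWP uniformly at random among all LWPs with a pending
event, whatever its kind.  The names (lwp_pick, select_event_lwp_sketch)
are invented; the real code is select_event_lwp with its
count_events_callback / select_event_lwp_callback helpers, per the
ChangeLog below.

#include <stdlib.h>

/* Illustrative LWP record; only the fields the selection needs.  */
struct lwp_pick
{
  struct lwp_pick *next;
  int status_pending_p;		/* Any kind of pending event counts.  */
};

/* Pick one LWP uniformly at random among all LWPs with a pending
   event, so a pending signal competes on equal footing with the
   pending SIGTRAPs.  Returns NULL if nothing is pending.  */

static struct lwp_pick *
select_event_lwp_sketch (struct lwp_pick *lwp_list)
{
  struct lwp_pick *lp;
  int num_events = 0, selector;

  for (lp = lwp_list; lp != NULL; lp = lp->next)
    if (lp->status_pending_p)
      num_events++;

  if (num_events == 0)
    return NULL;

  /* Pre-patch, both loops would instead test something like "pending
     status is a SIGTRAP stop", which is what let non-trap events
     starve.  */
  selector = rand () % num_events;

  for (lp = lwp_list; lp != NULL; lp = lp->next)
    if (lp->status_pending_p && selector-- == 0)
      return lp;

  return NULL;
}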

3) If multiple threads hit a breakpoint, we report one of them and
"cancel" the others.  Cancelling means decrementing the PC and
discarding the event.  If the next time the LWP is resumed the
breakpoint is still installed, the LWP should hit it again, and we'll
report the hit then.  The problem I found is that this holds back the
other threads too much, with the kernel potentially ending up
scheduling the same threads over and over while others make no
progress.  So the patch switches away from cancelling the breakpoints;
instead, it remembers that the LWP had stopped for a breakpoint.  If on
resume the breakpoint is still installed, we report it.  If it's no
longer installed, we discard the pending event at that point.  This is
actually how GDBserver used to handle this before d50171e4 (Teach linux
gdbserver to step-over-breakpoints), with the difference that back
then we'd delay adjusting the PC until resuming, which meant that
"info threads" could wrongly show threads with unadjusted PCs.

gdb/
2015-01-09  Pedro Alves  <palves@redhat.com>

	* breakpoint.c (hardware_breakpoint_inserted_here_p): New
	function.
	* breakpoint.h (hardware_breakpoint_inserted_here_p): New
	declaration.
	* linux-nat.c (linux_nat_status_is_event): Move higher up in file.
	(linux_resume_one_lwp): Store the thread's PC.  Adjust to clear
	stop_reason.
	(check_stopped_by_watchpoint): New function.
	(save_sigtrap): Reimplement.
	(linux_nat_stopped_by_watchpoint): Adjust.
	(linux_nat_lp_status_is_event): Delete.
	(stop_wait_callback): Only call save_sigtrap after storing the
	pending status.
	(status_callback): If the thread had been stopped for a breakpoint
	that has since been removed, discard the event and resume the LWP.
	(count_events_callback, select_event_lwp_callback): Use
	lwp_status_pending_p instead of linux_nat_lp_status_is_event.
	(cancel_breakpoint): Rename to ...
	(check_stopped_by_breakpoint): ... this.  Record whether the LWP
	stopped for a software breakpoint or hardware breakpoint.
	(select_event_lwp): Only give preference to the stepping LWP in
	all-stop mode.  Adjust comments.
	(stop_and_resume_callback): Remove references to new_pending_p.
	(linux_nat_filter_event): Likewise.  Leave exit events of the
	leader thread pending here.  Handle signal short circuiting here.
	Only call save_sigtrap after storing the pending waitstatus.
	(linux_nat_wait_1): Remove 'retry' label.  Remove references to
	new_pending.  Don't handle leaving events the caller is not
	interested in pending here, nor handle signal short-circuiting
	here.  Also give equal priority to all LWPs that have had events
	in non-stop mode.  If reporting a software breakpoint event,
	unadjust the LWP's PC.
	* linux-nat.h (enum lwp_stop_reason): New.
	(struct lwp_info) <stop_pc>: New field.
	(struct lwp_info) <stopped_by_watchpoint>: Delete field.
	(struct lwp_info) <stop_reason>: New field.
	* x86-linux-nat.c (x86_linux_prepare_to_resume): Adjust.

gdb/breakpoint.c
@@ -4293,6 +4293,29 @@ software_breakpoint_inserted_here_p (struct address_space *aspace,
   return 0;
 }
 
+/* See breakpoint.h.  */
+
+int
+hardware_breakpoint_inserted_here_p (struct address_space *aspace,
+				     CORE_ADDR pc)
+{
+  struct bp_location **blp, **blp_tmp = NULL;
+  struct bp_location *bl;
+
+  ALL_BP_LOCATIONS_AT_ADDR (blp, blp_tmp, pc)
+    {
+      struct bp_location *bl = *blp;
+
+      if (bl->loc_type != bp_loc_hardware_breakpoint)
+	continue;
+
+      if (bp_location_inserted_here_p (bl, aspace, pc))
+	return 1;
+    }
+
+  return 0;
+}
+
 int
 hardware_watchpoint_inserted_in_range (struct address_space *aspace,
 				       CORE_ADDR addr, ULONGEST len)