Having a useful __repr__ method can make debugging Python code that
little bit easier. This commit adds __repr__ for gdb.PendingFrame and
gdb.UnwindInfo classes, along with some tests.
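As a rough sketch of the effect (the unwinder below is made up for
illustration, and the exact repr text is an assumption; the tests pin
down the real format):

import gdb
from gdb.unwinder import Unwinder

class ReprDemo(Unwinder):
    """Never claims a frame; only prints the PendingFrame it sees."""

    def __init__(self):
        super().__init__("repr_demo")

    def __call__(self, pending_frame):
        # Previously this printed the generic
        # "<gdb.PendingFrame object at 0x...>" form; with __repr__
        # implemented the output is descriptive instead.
        print(repr(pending_frame))
        return None

gdb.unwinder.register_unwinder(None, ReprDemo())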
Reviewed-By: Tom Tromey <tom@tromey.com>
The gdb.Frame class has far more methods than gdb.PendingFrame. Given
that a PendingFrame hasn't yet been claimed by an unwinder, there is a
limit to which methods we can add to it, but many of the methods that
the Frame class has, the PendingFrame class could also support.
In this commit I've added those methods to PendingFrame that I believe
are safe.
In terms of implementation: if I were starting from scratch then I
would implement many of these (or most of these) as attributes rather
than methods. However, given that both Frame and PendingFrame are
just different representations of a frame, I think there is value in
keeping the interface for the two classes the same. For this reason
everything here is a method -- that's what the Frame class does.
The new methods I've added are:
- gdb.PendingFrame.is_valid: Return True if the pending frame
object is valid.
- gdb.PendingFrame.name: Return the name for the frame's function,
or None.
- gdb.PendingFrame.pc: Return the $pc register value for this
frame.
- gdb.PendingFrame.language: Return a string containing the
language for this frame, or None.
- gdb.PendingFrame.find_sal: Return a gdb.Symtab_and_line object
for the current location within the pending frame, or None.
- gdb.PendingFrame.block: Return a gdb.Block for the current
pending frame, or None.
- gdb.PendingFrame.function: Return a gdb.Symbol for the current
pending frame, or None.
In every case I've just copied the implementation over from gdb.Frame
and cleaned the code slightly, e.g. changing NULL to nullptr.
Additionally, each function required a small update to reflect the
PendingFrame type, but that's pretty minor.
There are tests for all the new methods.
For more extensive testing, I added the following code to the file
gdb/python/lib/command/unwinders.py:
from gdb.unwinder import Unwinder

class TestUnwinder(Unwinder):
    def __init__(self):
        super().__init__("XXX_TestUnwinder_XXX")

    def __call__(self, pending_frame):
        lang = pending_frame.language()
        try:
            block = pending_frame.block()
            assert isinstance(block, gdb.Block)
        except RuntimeError as rte:
            assert str(rte) == "Cannot locate block for frame."
        function = pending_frame.function()
        arch = pending_frame.architecture()
        assert arch is None or isinstance(arch, gdb.Architecture)
        name = pending_frame.name()
        assert name is None or isinstance(name, str)
        valid = pending_frame.is_valid()
        pc = pending_frame.pc()
        sal = pending_frame.find_sal()
        assert sal is None or isinstance(sal, gdb.Symtab_and_line)
        return None

gdb.unwinder.register_unwinder(None, TestUnwinder())
This registers a global unwinder that calls each of the new
PendingFrame methods and checks the result is of an acceptable type.
The unwinder never claims any frames though, so it shouldn't change
how GDB actually behaves.
I then ran the testsuite. There was only a single regression, a test
that uses 'disable unwinder' and expects a single unwinder to be
disabled -- the extra unwinder is now disabled too, which changes the
test output. So I'm reasonably confident that the new methods are not
going to crash GDB.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
This commit copies the pattern that is present in many other py-*.c
files: having a single macro to check that the Python object is still
valid.
This cleans up the code a little throughout the py-unwind.c file.
Some of the exception messages will change slightly with this commit,
though the type of the exceptions is still ValueError in all cases.
I started writing some tests for this change and immediately ran into
a problem: GDB would crash. It turns out that the PendingFrame
objects are not being marked as invalid!
In pyuw_sniffer where the pending frames are created, we make use of a
scoped_restore to invalidate the pending frame objects. However, this
only restores the pending_frame_object::frame_info field to its
previous value -- and it turns out we never actually give this field
an initial value; it's left undefined.
So, when the scoped_restore (called invalidate_frame) performs its
cleanup, it actually restores the frame_info field to an undefined
value. If this undefined value is not nullptr then any future
accesses to the PendingFrame object result in undefined behaviour
and, most likely, a crash.
As part of this commit I now initialize the frame_info field, which
ensures all the new tests now pass.
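The crash scenario can be sketched from Python (hypothetical names;
the point is stashing a PendingFrame past the unwind in which it was
valid and then poking it, which must raise ValueError rather than
touch stale state):

import gdb
from gdb.unwinder import Unwinder

class StashingUnwinder(Unwinder):
    """Never claims a frame; keeps a reference to the last one."""

    def __init__(self):
        super().__init__("stashing")
        self.last = None

    def __call__(self, pending_frame):
        self.last = pending_frame
        return None

stash = StashingUnwinder()
gdb.unwinder.register_unwinder(None, stash)

def poke_stale_frame():
    # After unwinding has finished, the stashed object must report
    # itself invalid, and method calls must raise ValueError instead
    # of dereferencing a stale frame_info.
    if stash.last is not None:
        assert not stash.last.is_valid()
        try:
            stash.last.pc()
        except ValueError:
            pass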
Reviewed-By: Tom Tromey <tom@tromey.com>
This commit makes a few related changes to the gdb.unwinder.Unwinder
class attributes:
1. The 'name' attribute is now a read-only attribute. This prevents
user code from changing the name after registering the unwinder. It
seems very unlikely that any user is actually trying to do this in
the wild, so I'm not very worried that this will upset anyone,
2. We now validate that the name is a string in the
Unwinder.__init__ method, and throw an error if this is not the
case. Hopefully nobody was doing this in the wild. This should
make it easier to ensure the 'info unwinder' command shows sane
output (how to display a non-string name for an unwinder?),
3. The 'enabled' attribute is now implemented with a getter and
setter. In the setter we ensure that the new value is a boolean,
but the more important change is that we call
'gdb.invalidate_cached_frames()'. This means that the backtrace
will be updated if a user manually disables an unwinder (rather than
calling the 'disable unwinder' command). It is not unreasonable to
think that a user might register multiple unwinders (relating to
some project) and have one command that disables/enables all the
related unwinders. Such a command might operate by poking the
enabled attribute of each unwinder object directly; after this
commit, this now works correctly (see the sketch below).
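A rough sketch of that use case (all names here are hypothetical):

import gdb

class ToggleMyUnwinders(gdb.Command):
    """Enable/disable a group of related unwinders in one go."""

    def __init__(self, unwinders):
        super().__init__("toggle-my-unwinders", gdb.COMMAND_USER)
        self._unwinders = unwinders

    def invoke(self, arg, from_tty):
        for u in self._unwinders:
            # With the new setter this also triggers
            # gdb.invalidate_cached_frames(), so the backtrace
            # refreshes just as with 'disable unwinder'.
            u.enabled = not u.enabled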
There are tests for all the changes, and lots of documentation
updates that both cover the new changes and further improve (I think)
the general documentation for GDB's Unwinder API.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
PR mi/11335 points out that an MI varobj will not display the result
of a pretty-printer's "to_string" method. Instead, it always shows
"{...}".
This does not seem very useful, and there have been multiple
complaints about it over the years. This patch changes varobj to emit
this string when possible, and updates the test suite.
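For reference, a minimal printer of the affected kind (type and field
names here are hypothetical): before this patch, an MI varobj for
such a value displayed "{...}" even though to_string supplies a
usable summary.

import gdb

class PointPrinter:
    """Pretty-printer whose to_string result the varobj can now show."""

    def __init__(self, val):
        self.val = val

    def to_string(self):
        return "Point(%s, %s)" % (self.val['x'], self.val['y'])

def lookup_point(val):
    if str(val.type.strip_typedefs()) == 'struct point':
        return PointPrinter(val)
    return None

gdb.pretty_printers.append(lookup_point)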
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=11335
A user here at AdaCore noticed that, when debugging a certain program,
a stack frame reported line 34358, where it should have been line
99894.
After debugging a bit, I discovered:
(top) p (99894 & ~65536)
$60 = 34358
That is, symbol::line is too narrow: line 99894 needs more than 16
bits, and truncated to 16 bits it wraps to 99894 - 65536 = 34358.
This patch widens the member and changes all the uses that currently
use the narrower type.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
When running gdb.python/tui-window-factory.exp with host board
local-remote-host-notty and target board native-gdbserver, I get:
...
FAIL: gdb.python/tui-window-factory.exp: msg_2: \
check test_window box (box check: ul corner is l, not +)
...
The problem is that the result of Term::prepare_for_tui is not checked.
Fix this by adding the missing check.
Tested on x86_64-linux.
When running gdb.python/tui-window.exp with host board
local-remote-host-notty and target board native-gdbserver, I get:
...
UNSUPPORTED: gdb.python/tui-window.exp: TUI not supported
FAIL: gdb.python/tui-window.exp: test title
...
Fix this by adding the missing return after the unsupported.
Tested on x86_64-linux.
PR python/17136 reported an unhandled exception when using a type
printer registered for specific objfiles, rather than as a global
type printer. The fix was accepted without a regression test; we've
been carrying one out-of-tree for a while, and I think it's worth
upstreaming. The code itself was developed by Jan Kratochvil.
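For illustration, objfile-scoped registration looks roughly like this
(the printer itself is a hypothetical sketch; the key point is
passing an objfile, not gdb itself, as the locus):

import gdb
import gdb.types

class MyTypePrinter:
    """Recognizes the (hypothetical) typedef 'my_int'."""

    def __init__(self):
        self.name = 'my-typedefs'
        self.enabled = True

    def instantiate(self):
        return self.Recognizer()

    class Recognizer:
        def recognize(self, type_obj):
            if type_obj.name == 'my_int':
                return 'MyInt'
            return None

def attach_printer(objfile):
    # Registering against one objfile is what used to trigger the
    # unhandled exception fixed by this PR.
    gdb.types.register_type_printer(objfile, MyTypePrinter())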
Co-Authored-By: Jan Kratochvil <jkratochvil@azul.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=17136
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Approved-By: Tom Tromey <tom@tromey.com>
With test-case gdb.python/py-completion.exp and target board
native-extended-gdbserver I get this warning:
...
(gdb) PASS: gdb.python/py-completion.exp: discard #2
completefilecommandcond $outputs/gdb.python/py-completion/py-completion-t^G\
PASS: gdb.python/py-completion.exp: completefilecommandcond completion
Remote debugging from host ::1, port 53346^M
monitor exit^M
not implemented^M
(gdb) WARNING: Timed out waiting for EOF in server after monitor exit
...
Fix this by adding the missing "discard #3", such that we have instead:
...
(gdb) PASS: gdb.python/py-completion.exp: discard #2
completefilecommandcond $outputs/gdb.python/py-completion/py-completion-t^G\
PASS: gdb.python/py-completion.exp: completefilecommandcond completion
^M
not implemented^M
(gdb) PASS: gdb.python/py-completion.exp: discard #3
Remote debugging from host ::1, port 36278^M
monitor exit^M
(gdb)
...
Tested on x86_64-linux.
[ Using $pp as shorthand for the pagination prompt
"--Type <RET> for more, q to quit, c to continue without paging--". ]
The test-case gdb.python/py-cmd.exp passes, but the handling of the
test_multiline command output looks a bit odd:
...
(gdb) test_multiline
test_multiline output
...
test_multiline output
$ppPASS: gdb.python/py-cmd.exp: verify pagination from test_multiline
q
test_multiline
Quit
(gdb) test_multiline
test_multiline output
...
test_multiline output
$ppPASS: gdb.python/py-cmd.exp: verify pagination from test_multiline: q
...
What happens is:
- a test_multiline command is issued
- some output is printed, followed by a pagination prompt
- the test-case concludes that pagination occurred, and produces a PASS
- "q\n" is replied to the pagination prompt
- without waiting for a response to the "q\n", another test_multiline
command is issued
- in response to the "q\n" we get "Quit\n(gdb) "
- some output is printed, followed by a pagination prompt
- the test-case concludes that there's a valid response to the "q\n"
and produces a PASS, consuming the second pagination prompt without
replying to it.
My conclusion is that the second test_multiline command is unintentional, so fix
this by removing it.
Without it, we have the more straightforward:
...
(gdb) test_multiline
test_multiline output
...
test_multiline output
$ppPASS: gdb.python/py-cmd.exp: verify pagination from test_multiline
q
Quit
(gdb) PASS: gdb.python/py-cmd.exp: verify pagination from test_multiline: q
...
This also fixes the following warning with target board native-gdbserver:
...
WARNING: Timed out waiting for EOF in server after monitor exit
...
Tested on x86_64-linux.
With test-case gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event.exp
and target board remote-gdbserver-on-localhost, I run into:
...
FAIL: $exp: runto: run to main
...
I can easily fix this using "gdb_load_shlib $binfile_lib", but then run into:
...
(gdb) print all_good^M
$1 = false^M
(gdb) FAIL: $exp: print all_good
info pretty-printer^M
...
Sysroot is set to "target:", so gdb downloads the shared library from the target
(Using $so as shorthand for
libpy-autoloaded-pretty-printers-in-newobjfile-event.so):
...
Reading /home/remote-target/$so from remote target...^M
...
and internally refers to it as "target:/home/remote-target/$so".
In load_auto_scripts_for_objfile, gdb gives up trying to auto-load scripts
for $so once it checks for is_target_filename.
Fix this by declaring auto-load unsupported if sysroot starts with "target:".
Tested on x86_64-linux.
I ran into:
...
(gdb) PASS: gdb.python/py-record-btrace.exp: function call: \
python print(c.prev)
python print(c == c.next.prev)^M
Traceback (most recent call last):^M
File "<string>", line 1, in <module>^M
AttributeError: 'NoneType' object has no attribute 'prev'^M
Error while executing Python code.^M
(gdb) FAIL: gdb.python/py-record-btrace.exp: function call: \
python print(c == c.next.prev)
...
due to having only 4 insns instead of 100:
...
python print(len(insn))^M
4^M
...
This could be caused by the same hw bug as we already have an xfail for, so
expand the xfail matching.
Tested on x86_64-linux.
PR testsuite/30185
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30185
Approved-By: Markus T. Metzger <markus.t.metzger@intel.com>
On powerpc64le-linux, I run into two timeouts:
...
FAIL: gdb.python/py-breakpoint.exp: test_watchpoints: \
Test watchpoint write (timeout)
FAIL: gdb.python/py-breakpoint.exp: test_bkpt_internal: \
Test watchpoint write (timeout)
...
In this case, hw watchpoints are not supported, and using sw watchpoints
is slow.
Most of the time is spent in handling a try-catch, which triggers a malloc. I
think this bit is more relevant for the "catch throw" part of the test-case,
so fix the timeouts by setting the watchpoints after the try-catch.
Tested on x86_64-linux and powerpc64le-linux.
Hannes filed a bug showing a crash, where a pretty-printer written in
Python could cause a use-after-free. He sent a patch, but I thought a
different approach was needed.
In a much earlier patch (see bug #12533), we changed the Python code
to release new values from the value chain when constructing a
gdb.Value. The rationale for this is that if you write a command that
does a lot of computations in a loop, all the values will be kept live
by the value chain, resulting in gdb using a large amount of memory.
However, suppose a value is passed to Python from some code in gdb
that needs to use the value after the call into Python. In this
scenario, value_to_value_object will still release the value -- and
because gdb code doesn't generally keep strong references to values (a
consequence of the ancient decision to use the value chain to avoid
memory management), this will result in a use-after-free.
This scenario can happen, as it turns out, when a value is passed to
Python for pretty-printing. Now, normally this route boxes the value
via value_to_value_object_no_release, avoiding the problematic release
from the value chain. However, if you then call Value.cast, the
underlying value API might return the same value, which is then
released from the chain.
This patch fixes the problem by changing how value boxing is done.
value_to_value_object no longer removes a value from the chain.
Instead, every spot in gdb that might construct new values uses a
scoped_value_mark to ensure that the requirements of bug #12533 are
met. And, because incoming values aren't ever released from the chain
(the Value.cast one comes earlier on the chain than the
scoped_value_mark), the bug can no longer occur. (Note that many
spots in the Python layer already take this approach, so not many
places needed to be touched.)
In the future I think we should replace the use of raw "value *" with
value_ref_ptr pretty much everywhere. This will ensure lifetime
safety throughout gdb.
The test case in this patch comes from Hannes' original patch. I only
made a trivial ("require") change to it. However, while this fails
for him, I can't make it fail on this machine; nevertheless, he tried
my patch and reported the bug as being fixed.
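Roughly, the failing pattern looks like this (the type name is
hypothetical): a pretty-printer whose to_string calls Value.cast,
where the cast may hand back the very same underlying value.

import gdb

class CastingPrinter:
    def __init__(self, val):
        self.val = val

    def to_string(self):
        # Casting to the value's own type can return the same
        # internal value -- the one that, pre-patch, was released
        # from the value chain while gdb still needed it.
        return str(self.val.cast(self.val.type))

def lookup_widget(val):
    if str(val.type.strip_typedefs()) == 'struct widget':
        return CastingPrinter(val)
    return None

gdb.pretty_printers.append(lookup_widget)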
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30044
I came across:
...
(gdb) PASS: gdb.python/py-record-btrace.exp: prepare record: stepi 100
python insn = r.instruction_history^M
warning: Non-contiguous trace at instruction 1 (offset = 0x3e10).^M
(gdb) FAIL: gdb.python/py-record-btrace.exp: prepare record: python insn = r.i\
nstruction_history
...
I'm assuming it's the same root cause as for the already present XFAIL.
Fix this by recognizing above warning in the xfail regexp.
Tested on x86_64-linux, although sofar I was not able to trigger the warning
again.
Approved-By: Markus T. Metzger <markus.t.metzger@intel.com>
Currently, looking at the type of an internal function, like below,
hits an odd error:
(gdb) ptype $_isvoid
type = <internal function>type not handled in c_type_print_varspec_prefix()
That is an error thrown from
c-typeprint.c:c_type_print_varspec_prefix, where it reads:
...
    case TYPE_CODE_DECFLOAT:
    case TYPE_CODE_FIXED_POINT:
      /* These types need no prefix.  They are listed here so that
         gcc -Wall will reveal any types that haven't been handled.  */
      break;

    default:
      error (_("type not handled in c_type_print_varspec_prefix()"));
      break;
Internal function types have type code TYPE_CODE_INTERNAL_FUNCTION,
which is not explicitly handled by that switch.
That comment quoted above says that gcc -Wall will reveal any types
that haven't been handled, but that's not actually true, at least with
modern GCCs. You would need to enable -Wswitch-enum for that, which
we don't. If I do enable that warning, then I see that we're missing
handling for the following type codes:
TYPE_CODE_INTERNAL_FUNCTION,
TYPE_CODE_MODULE,
TYPE_CODE_NAMELIST,
TYPE_CODE_XMETHOD
TYPE_CODE_MODULE and TYPE_CODE_NAMELIST are Fortran-specific, so it'd
be a little weird to handle them here.
I tried to reach this code with TYPE_CODE_XMETHOD, but couldn't figure
out how to. ptype on an xmethod isn't treated specially; it just
complains that the method doesn't exist. I've extended the
gdb.python/py-xmethods.exp testcase to make sure of that.
My thinking is that whatever type code we add next, the most likely
scenario is that it won't need any special handling, so we'd just be
adding another case to that "do nothing" list. If we do need special
casing for whatever type code, I think that tests added at the same
time as the feature would uncover it anyhow. If we do miss adding the
special casing, then it still looks better to me to print the type
somewhat incompletely than to error out and make it harder for users
to debug whatever they need. So I think that the best thing to do
here is to just remove all those explicit "do nothing" cases, along
with the error default case.
After doing that, I decided to write a testcase that iterates over all
supported languages doing "ptype INTERNAL_FUNC". That revealed that
Pascal has a similar problem, except the default case hits a
gdb_assert instead of an error:
(gdb) with language pascal -- ptype $_isvoid
type =
../../src/gdb/p-typeprint.c:268: internal-error: type_print_varspec_prefix: unexpected type
A problem internal to GDB has been detected,
further debugging may prove unreliable.
That is fixed by this patch in the same way.
You'll notice that the new testcase special-cases the Ada expected
output:
} elseif {$lang == "ada"} {
    gdb_test "ptype \$_isvoid" "<<internal function>>"
} else {
    gdb_test "ptype \$_isvoid" "<internal function>"
}
That will be the subject of the following patch.
Approved-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: I81aec03523cceb338b5180a0b4c2e4ad26b4c4db
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30105
There's a HW bug affecting Processor Trace on some Intel processors
(Ice Lake to Raptor Lake microarchitectures).
The bug was exposed by linux kernel commit 670638477aed
("perf/x86/intel/pt: Opportunistically use single range output mode"),
added in version v5.5.0, and was worked around by commit ce0d998be927
("perf/x86/intel/pt: Fix sampling using single range output") in version
6.1.0.
The bug manifests (on a Performance-core of an i7-1250U, an Alder Lake cpu) in
a single test-case:
...
(gdb) python insn = r.instruction_history^M
warning: Decode error (-20) at instruction 33 (offset = 0x3d6a, \
pc = 0x400501): compressed return without call.^M
(gdb) FAIL: gdb.python/py-record-btrace.exp: prepare record: \
python insn = r.instruction_history
...
Add a corresponding XFAIL.
Note that the i7-1250U has both Performance-cores and Efficient-cores, and on
an Efficient-Core the test-case runs without any problems, so if the testsuite
run is not pinned to a specific cpu, the test may either PASS or XFAIL.
Tested on x86_64-linux:
- openSUSE Leap 15.4 with linux kernel version 5.14.21
- openSUSE Tumbleweed with linux kernel version 6.1.8
PR testsuite/30075
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30075
The previous commit relied on spotting when a Python defined TUI
window factory was deleted. I spotted that the window factories are
not deleted when GDB shuts down its Python environment, they are only
deleted when one window factory replaces another. Consider this
example Python script:
class TestWindowFactory:
    def __init__(self, msg):
        self.msg = msg
        print("Entering TestWindowFactory.__init__: %s" % self.msg)

    def __call__(self, tui_win):
        print("Entering TestWindowFactory.__call__: %s" % self.msg)
        return TestWindow(tui_win, self.msg)

    def __del__(self):
        print("Entering TestWindowFactory.__del__: %s" % self.msg)

gdb.register_window_type("test_window", TestWindowFactory("A"))
gdb.register_window_type("test_window", TestWindowFactory("B"))
And this GDB session:
(gdb) source tui.py
Entering TestWindowFactory.__init__: A
Entering TestWindowFactory.__init__: B
Entering TestWindowFactory.__del__: A
(gdb) quit
Notice that when the 'B' window replaces the 'A' window we see the 'A'
object being deleted. But, when Python is shut down (after the
'quit') the 'B' object is never deleted.
Instead, GDB retains a reference to the window factory object, which
forces the Python object to remain live even after the Python
interpreter itself has been shut down.
The references themselves are held in a dynamically allocated
std::unordered_map (in tui/tui-layout.c) which is never deallocated,
thus the underlying Python references are never decremented to zero,
and so GDB never tries to delete these Python objects.
This commit is the first half of the work to clean up this edge case.
All gdbpy_tui_window_maker objects (the objects that implement the
TUI window factory callback for Python defined TUI windows), are now
linked together into a global list using the intrusive list mechanism.
When GDB shuts down the Python interpreter we can now walk this global
list and release the reference that is held to the underlying Python
object. By releasing this reference the Python object will now be
deleted.
I've added a new assert in gdbpy_tui_window_maker::operator(); this
will catch the case where we somehow end up in here after having
reset the reference to the underlying Python object. I don't think
this should ever happen though as we only clear the references when
shutting down the Python interpreter, and the ::operator() function is
only called when trying to apply a new TUI layout - something that
shouldn't happen while GDB itself is shutting down.
This commit does not update the std::unordered_map in tui-layout.c,
that will be done in the next commit.
Reviewed-By: Tom Tromey <tom@tromey.com>
The documentation for gdb.register_window_type says:
"... It's an error to try to replace one of the built-in windows,
but other window types can be replaced. ..."
I take this to mean that if I imported a Python script like this:
gdb.register_window_type('my_window', FactoryFunction)
Then GDB would have a new TUI window 'my_window', which could be
created by calling FactoryFunction(). If I then, in the same GDB
session imported a script which included:
gdb.register_window_type('my_window', UpdatedFactoryFunction)
Then GDB would replace the old 'my_window' factory with my new one,
GDB would now call UpdatedFactoryFunction().
This is pretty useful in practice, as it allows users to iterate on
their window implementation within a single GDB session.
However, right now, this is not how GDB operates. The second call to
register_window_type is basically ignored and the old window factory
is retained.
This is because in tui_register_window (tui/tui-layout.c) we use
std::unordered_map::emplace to insert the new factory function, and
emplace doesn't replace an existing element in an unordered_map.
In this commit, before the emplace call, I now search for an already
existing element and delete any matching element from the map; the
emplace call will then add the new factory function.
Reviewed-By: Tom Tromey <tom@tromey.com>
We have an inconsistency in value history accesses where array element
accesses cause an error for entries exceeding the currently selected
`max-value-size' setting even where such accesses successfully complete
for elements located in the inferior, e.g.:
(gdb) p/d one
$1 = 0
(gdb) p/d one_hundred
$2 = {0 <repeats 100 times>}
(gdb) p/d one_hundred[99]
$3 = 0
(gdb) set max-value-size 25
(gdb) p/d one_hundred
value requires 100 bytes, which is more than max-value-size
(gdb) p/d one_hundred[99]
$7 = 0
(gdb) p/d $2
value requires 100 bytes, which is more than max-value-size
(gdb) p/d $2[99]
value requires 100 bytes, which is more than max-value-size
(gdb)
According to our documentation the `max-value-size' setting is a safety
guard against allocating an overly large amount of memory. Moreover a
statement in documentation says, concerning this setting, that: "Setting
this variable does not affect values that have already been allocated
within GDB, only future allocations." While in the implementer-speak
the sentence may be unambiguous I think the outside user may well infer
that the setting does not apply to values previously printed.
Therefore rather than just fixing this inconsistency it seems reasonable
to lift the setting for value history accesses, under an implication
that by having been retrieved from the debuggee they have already passed
the safety check. Do it then, by suppressing the value size check in
`value_copy' -- under an observation that if the original value has been
already loaded (i.e. it's not lazy), then it must have previously passed
said check -- making the last two commands succeed:
(gdb) p/d $2
$8 = {0 <repeats 100 times>}
(gdb) p/d $2 [99]
$9 = 0
(gdb)
Expand the testsuite accordingly, covering both value history handling
and the use of `value_copy' by `make_cv_value', used by Python code.
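As a rough Python-side illustration of the new behaviour (assuming a
session like the one above, where $2 already holds the 100-byte
array):

import gdb

gdb.execute('set max-value-size 25')
# $2 was already loaded when it entered the value history, so
# value_copy no longer rejects it against max-value-size.
big = gdb.history(2)
print(big[99])  # element access succeeds as well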
The "set print pretty" command has no effect on the loading of GDB
pretty printers, and is removed by this patch to avoid confusion.
Per the documentation, "set print pretty" instead controls output
layout: "Cause GDB to print structures in an indented format with one
member per line".
"Cause GDB to print structures in an indented format with one member per line"
This changes a number of MI tests to use mi_clean_restart rather than
separate calls. This reduces the number of lines, which is nice, and
also provides a nicer model to copy for future tests.
This changes some tests to use require with 'is_remote', rather than
an explicit test. This adds uniformity and helps clean up more spots
where a test might exit early without any notification.
The test suite warns about path names appearing in several test
names. I couldn't find
most of these, and I suspect they are false reports, but I did manage
to locate one. This one is probably harmless, as I think the path
does not vary; but it's also easy to fix and suppress one warning.
Modify test cases that verify the operation of the array element limit
with character strings such that they are executed twice, once with the
`set print characters' option set to `elements' and the limit controlled
with the `set print elements' option, and then again with the limit
controlled with the `set print characters' option instead. Similarly
with the `-elements' and `-characters' options for the `print' command.
Additionally verify that said `print' command options combined yield the
expected result.
Verify correct $_gdb_setting and $_gdb_setting_str values for the `print
characters' setting, in particular the `void' value for the `elements'
default, which has no corresponding integer value exposed.
Add Guile and Python coverage for the `print characters' GDB setting.
There are new tests for Ada and Pascal, as the string printing code for
these languages is different than the generic string printing code used
by other languages. Modula2 also has different string printing code,
but (a) this is similar to Pascal, and (b) there are no existing modula2
tests written in Modula2, so I'm not sure how I'd even test the Modula2
string printing.
Co-Authored-By: Maciej W. Rozycki <macro@embecosm.com>
Approved-By: Simon Marchi <simon.marchi@efficios.com>
Rather than just `unlimited' allow the integer set commands (or command
options) to define arbitrary keywords for the user to use, removing
hardcoded arrangements for the `unlimited' keyword.
Remove the confusingly named `var_zinteger', `var_zuinteger' and
`var_zuinteger_unlimited' `set'/`show' command variable types redefining
them in terms of `var_uinteger', `var_integer' and `var_pinteger', which
have the range of [0;UINT_MAX], [INT_MIN;INT_MAX], and [0;INT_MAX] each.
Following existing practice `var_pinteger' allows extra negative values
to be used, however unlike `var_zuinteger_unlimited' any number of such
values can be defined rather than just `-1'.
The "p" in `var_pinteger' stands for "positive", for the lack of a more
appropriate unambiguous letter, even though 0 obviously is not positive;
"n" would be confusing as to whether it stands for "non-negative" or
"negative".
Add a new structure, `literal_def', the entries of which define extra
keywords allowed for a command and numerical values they correspond to.
Those values are not verified against the basic range supported by the
underlying variable type, allowing extra values to be allowed outside
that range, which may or may not be individually made visible to the
user. An optional value translation is possible with the structure to
follow the existing practice for some commands where user-entered 0 is
internally translated to UINT_MAX or INT_MAX. Such translation can now
be arbitrary. Literals defined by this structure are automatically used
for completion as necessary.
So for example:
const literal_def integer_unlimited_literals[] =
  {
    { "unlimited", INT_MAX, 0 },
    { nullptr }
  };
defines an extra `unlimited' keyword and a user-visible 0 value, both of
which get translated to INT_MAX for the setting to be used with.
Similarly:
const literal_def zuinteger_unlimited_literals[] =
  {
    { "unlimited", -1, -1 },
    { nullptr }
  };
defines the same keyword and a corresponding user-visible -1 value
that is used for the requested setting. If the last member were
omitted (or set to `{}') here, then only the keyword would be allowed
for the user to enter, and while -1 would still be used internally,
trying to enter it as part of a command would result in an "integer
-1 out of range" error.
Use said error message in all cases (citing the invalid value
requested), replacing the "only -1 is allowed to set as unlimited"
message previously used for `var_zuinteger_unlimited' settings,
rather than propagating it to the `var_pinteger' type. That message
could only be used for the specific case where a single extra
`unlimited' keyword was defined standing for -1, and the use of
numeric equivalents is discouraged anyway, as it is only for
historical reasons that they expose GDB internals, confusingly
different across variable types. Similarly update the "must be >= -1"
Guile error message.
Redefine Guile and Python parameter types in terms of the new variable
types and interpret extra keywords as Scheme keywords and Python strings
used to communicate corresponding parameter values. Do not add a new
PARAM_INTEGER Guile parameter type, however do handle the `var_integer'
variable type now, permitting existing parameters defined by GDB proper,
such as `listsize', to be accessed from Scheme code.
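For example, on the Python side a keyword-valued setting now comes
back as a string (a sketch, assuming the `print characters' setting
from the related patches):

import gdb

val = gdb.parameter('print characters')
if isinstance(val, str):
    # E.g. the 'elements' keyword, meaning the limit tracks the
    # 'set print elements' setting.
    print('print characters follows: %s' % val)
else:
    print('print characters limit: %s' % val)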
With these changes in place it should be trivial for a Scheme or Python
programmer to expand the syntax of the `make-parameter' command and the
`gdb.Parameter' class initializer to have arbitrary extra literals along
with their internal representation supplied.
Update the testsuite accordingly.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
On s390x-linux, I run into:
...
(gdb) python print(u[u_fields[0]])^M
99^M
(gdb) PASS: gdb.python/py-value-cc.exp: u's first field via field
python print(u[u_fields[1]])^M
0 '\000'^M
(gdb) FAIL: gdb.python/py-value-cc.exp: u's second field via field
...
There's a var u of this type:
...
union U {
int a;
char c;
};
...
and after assigning 99 to u.a, the test-case expects u.c to contain 99 (which
it does on x86_64), but instead it contains 0.
Fix this by instead assigning 0x63636363 (0x63 == 99, so every byte
of the union holds 99), ensuring that u.c == 99 for both little and
big endian.
Tested on x86_64-linux and s390x-linux.
This changes skip_tui_tests to invert the sense, and renames it to
allow_tui_tests. It also rewrites this function to use the output of
"gdb --configuration", and it adds a note about the state of the TUI
to that output.
This changes skip_hw_breakpoint_tests to invert the sense, and renames
it to allow_hw_breakpoint_tests. This also converts some tests to use
"require" -- I missed this particular check in the first series.
This changes skip_cplus_tests to invert the sense, and renames it to
allow_cplus_tests. This one also converts skip_stl_tests to
allow_stl_tests, as that was convenient to do at the same time.
mi_skip_python_tests was necessary because skip_python_tests used the
running gdb, and so needed to know what prompt to expect. Now that
skip_python_tests has been rewritten, mi_skip_python_tests is no
longer needed.