Add support for the SME2 ADD, SUB, FADD and FSUB instructions.
SUB and FSUB have the same form as ADD and FADD, except that
ADD also has a 2-operand accumulating form.
The 64-bit ADD/SUB instructions require FEAT_SME_I16I64 and the
64-bit FADD/FSUB instructions require FEAT_SME_F64F64.
These are the first instructions to have tied register list
operands, as opposed to tied single registers.
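For example, the 2-operand accumulating form of ADD looks roughly like
this (approximate syntax, inferred from the error messages below rather
than taken from the testsuite):

  add     za.s[w8, 0], { z0.s, z1.s }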
The parse_operands change prevents unsuffixed Z registers (width==-1)
from being treated as though they had an Advanced SIMD-style suffix
(.4s etc.). It means that:
Error: expected element type rather than vector type at operand 2 -- `add za\.s\[w8,0\],{z0-z1}'
becomes:
Error: missing type suffix at operand 2 -- `add za\.s\[w8,0\],{z0-z1}'
SME2 adds lookup table instructions for quantisation. They use
a new lookup table register called ZT0.
LUTI2 takes an unsuffixed SVE vector index of the form Zn[<imm>],
which is the first time that this syntax has been used.
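A rough example of the new index syntax (the operand details here are
approximate, not copied from the testsuite):

  luti2   z0.b, zt0, z1[0]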
Implementation-wise, the main things to note here are (examples below):
- the WHILE* instructions have forms that return a pair of predicate
registers. This is the first time that we've had lists of predicate
registers, and they wrap around after register 15 rather than after
register 31.
- the predicate-as-counter WHILE* instructions have a fourth operand
that specifies the vector length. We can treat this as an enumeration,
except that immediate values aren't allowed.
- PEXT takes an unsuffixed predicate index of the form PN<n>[<imm>].
This is the first instance of a vector/predicate index having
no suffix.
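Rough examples of the three cases, with approximate operand details:

  whilelo { p0.h, p1.h }, x0, x1    // pair of predicate results
  whilelo pn8.h, x0, x1, vlx2       // predicate-as-counter result
  pext    p0.h, pn8[0]              // unsuffixed predicate index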
SME2 adds LD1 and ST1 variants for lists of 2 and 4 registers.
The registers can be consecutive or strided. In the strided case,
2-register lists have a stride of 8, starting at register x0xxx.
4-register lists have a stride of 4, starting at register x00xx.
The instructions are predicated on a predicate-as-counter register in
the range pn8-pn15. Although we already had register fields with upper
bounds of 7 and 15, this is the first plain register operand to have a
nonzero lower bound. The patch uses the operand-specific data field
to record the minimum value, rather than having separate inserters
and extractors for each lower bound. This in turn required adding
an extra bit to the field.
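For example, the strided forms look roughly like this (approximate
syntax):

  ld1w    { z0.s, z8.s }, pn8/z, [x0]               // stride 8
  ld1w    { z0.s, z4.s, z8.s, z12.s }, pn9/z, [x1]  // stride 4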
SME2 defines new MOVA instructions for moving multiple registers
to and from ZA. As with SME, the instructions are also available
through MOV aliases.
One notable feature of these instructions (and many other SME2
instructions) is that some register lists must start at a multiple
of the list's size. The patch uses the general error "start register
out of range" when this constraint isn't met, rather than an error
specifically about multiples. This ensures that the error is
consistent between these simple consecutive lists and later
strided lists, for which the requirements aren't a simple multiple.
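As a rough illustration (the operand details are approximate, not taken
from the testsuite):

  mova    { z0.d, z1.d }, za.d[w8, 0, vgx2]
  mov     { z0.d, z1.d }, za.d[w8, 0, vgx2]   // preferred alias
  mova    { z1.d, z2.d }, za.d[w8, 0, vgx2]   // rejected: start register
                                              // out of range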
SME2 adds a new format for the existing SVE predicate registers:
predicates as counters rather than predicates as masks. In assembly
code, operands that interpret predicates as counters are written
pn<N> rather than p<N>.
This patch adds support for these registers and extends some
existing instructions to support them. Since the new forms
are just a programmer convenience, there's no need to make them
more restrictive than the earlier predicate-as-mask forms.
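For instance (an assumed example rather than one from the patch), the
predicate loads and stores could then accept either spelling:

  ldr     p8, [x0, #7, mul vl]     // predicate as mask
  ldr     pn8, [x0, #7, mul vl]    // same encoding, predicate as counter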
Some SME2 instructions operate on a range of consecutive ZA vectors.
This is indicated by syntax such as:
za[<Wv>, <imml>:<immh>]
As with the earlier vgx2 and vgx4 support, we get better error
messages if the parser allows all ZA indices to have a range.
We can then reject invalid cases during constraint checking.
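For example, an instruction that targets two consecutive ZA vectors
might be written roughly as (approximate syntax):

  fmlal   za.s[w8, 0:1], z0.h, z1.h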
Many SME2 instructions operate on groups of 2 or 4 ZA vectors.
This is indicated by adding a "vgx2" or "vgx4" group size to the
ZA index. The group size is optional in assembly but preferred
for disassembly.
There is no binary distinction between mnemonics that have
group sizes and mnemonics that don't, nor between mnemonics that
take vgx2 and mnemonics that take vgx4. We therefore get better
error messages if we allow any ZA index to have a group size
during parsing, and wait until constraint checking to reject
invalid sizes.
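For example, both of the following are accepted for a 2-register form,
with the first being the preferred disassembly (approximate syntax):

  add     za.s[w8, 0, vgx2], { z0.s, z1.s }
  add     za.s[w8, 0], { z0.s, z1.s }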
A quirk of the way errors are reported means that if an instruction
is wrong both in its qualifiers and its use of a group size, we'll
print suggested alternative instructions that also have an incorrect
group size. But that's a general property that also applies to
things like out-of-range immediates. It's also not obviously the
wrong thing to do. We need to be relatively confident that we're
looking at the right opcode before reporting detailed operand-specific
errors, so doing qualifier checking first seems reasonable.
SME2 adds various new fields that are similar to
AARCH64_OPND_SME_ZA_array, but are distinguished by the size of
their offset fields. This patch adds _off4 to the name of the
field that we already have.
Until now, binutils has supported register ranges such
as { v0.4s - v3.4s } as an unofficial shorthand for
{ v0.4s, v1.4s, v2.4s, v3.4s }. The SME2 ISA embraces this form
and makes it the preferred disassembly. It also embraces wrapped
lists such as { z31.s - z2.s }, which is something that binutils
didn't previously allow.
The range form was already binutils's preferred disassembly for 3- and
4-register lists. This patch prefers it for 2-register lists too.
The patch also adds support for wrap-around.
SME2 has instructions that accept strided register lists,
such as { z0.s, z4.s, z8.s, z12.s }. The purpose of this
patch is to extend binutils to support such lists.
The parsing code already had (unused) support for strides of 2.
The idea here is instead to accept all strides during parsing
and reject invalid strides during constraint checking.
The SME2 instructions that accept strided operands also have
non-strided forms. The errors about invalid strides therefore
take a bitmask of acceptable strides, which allows multiple
possibilities to be summed up in a single message.
I've tried to update all code that handles register lists.
Some FLD_imm* suffixes used a counting scheme such as FLD_immN,
FLD_immN_2, FLD_immN_3, etc., while others used the lsb as the
suffix. The latter seems more mnemonic, and was a big help
in doing the SME2 work.
Similarly, the _10 suffix on FLD_SME_size_10 was nonobvious.
Presumably it indicated a 2-bit field, but the field actually starts
at bit 22.
Quite a lot of SME2 instructions have an opcode bit that selects
between 32-bit and 64-bit forms of an instruction, with the 32-bit
forms being part of base SME2 and with the 64-bit forms being part
of an optional extension. It's nevertheless useful to have a single
opcode entry for both forms since (a) that matches the ISA definition
and (b) it tends to improve error reporting.
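For example, a single entry can then cover both of the following, with
only the second depending on FEAT_SME_I16I64 (approximate syntax):

  add     za.s[w8, 0], { z0.s, z1.s }
  add     za.d[w8, 0], { z0.d, z1.d }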
This patch therefore adds a libopcodes function called
aarch64_cpu_supports_inst_p that tests whether the target
supports a particular instruction. In future it will depend
on internal libopcodes routines.
This patch renames the OP_SME_* macros in aarch64-tbl.h so that
they follow the same scheme as the OP_SVE_* ones. It also uses
OP_SVE_ as the prefix, since there is no real distinction between
the SVE and SME uses of qualifiers: a macro defined for one can
be useful for the other too.
If an instruction has invalid qualifiers, GAS would report the
error against the final opcode entry that got to the qualifier-
checking stage. It seems better to report the error against
the opcode entry that had the closest match, just like we
pick the closest match within an opcode entry for the
"did you mean this?" message.
This patch adds the number of invalid operands as an
argument to AARCH64_OPDE_INVALID_VARIANT and then picks the
AARCH64_OPDE_INVALID_VARIANT with the lowest argument.
AARCH64_OPDE_REG_LIST took a single operand that specified the
expected number of registers. However, there are quite a few
SME2 instructions that have both 2-register forms and (separate)
4-register forms. If the user tries to use a 3-register list,
it isn't obvious which opcode entry they meant. Saying that we
expect 2 registers and saying that we expect 4 registers would
both be wrong.
This patch therefore switches the operand to a bitfield. If an
AARCH64_OPDE_REG_LIST is reported against multiple opcode entries,
the patch ORs together the expected lengths.
This has no user-visible effect yet. A later patch adds more error
strings, alongside tests that use them.
SVE register lists were classified as SVE_REG, since there had been
no particular reason to separate them out. However, some SME2
instructions have tied register list operands, and so we need to
distinguish registers and register lists when checking whether two
operands match.
Also, the register list operands used a general error message,
even though we already have a dedicated error code for register
lists that are the wrong length.
libopcodes currently reports out-of-range registers as a general
AARCH64_OPDE_OTHER_ERROR. However, this means that each register
range needs its own hard-coded string, which is a bit cumbersome
if the range is determined programmatically. This patch therefore
adds a dedicated error type for out-of-range errors.
In SME, the vector select register had to be in the range
w12-w15, so it made sense to enforce that during parsing.
However, SME2 adds instructions for which the range is
w8-w11 instead.
This patch therefore moves the range check from the parsing
stage to the constraint-checking stage.
Also, the previous error used a capitalised range W12-W15,
whereas other register range errors used lowercase ranges
like p0-p7. A quick internal poll showed a preference for
the lowercase form, so the patch uses that.
The patch uses "selection register" rather than "vector
select register" so that the terminology extends more
naturally to PSEL.
This patch moves the range checks on ZA vector select offsets from
gas to libopcodes. Doing the checks there means that the error
messages contain the expected range. It also fits in better
with the error severity scheme, which becomes important later.
(This is because out-of-range indices are treated as more severe than
syntax errors, on the basis that parsing must have succeeded if we get
to the point of checking the completed opcode.)
The patch also adds a new check_za_access function for checking
ZA accesses. That's a bit over the top for one offset check, but the
function becomes more complex with later patches.
sme-9-illegal.s checked for an invalid .q suffix using:
psel p1, p15, p3.q[w15]
but this is doubly invalid because it misses the immediate part
of the index. The patch keeps that test but adds another with
a zero index, so that .q is the only thing wrong.
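In other words, something like:

  psel    p1, p15, p3.q[w15]       // original test, missing the index
  psel    p1, p15, p3.q[w15, 0]    // added test, only .q is wrong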
The aarch64-tbl.h change includes neatening up the backslash
positions.
A later patch moves the range checking for ZA vector select
offsets from gas to libopcodes. That in turn requires the
immediate field to be big enough to support all parsed values.
This shouldn't be a particularly size-sensitive structure,
so there should be no memory problems with doing this.
za_tile_vector is also used for indexing ZA as a whole, rather than
just for indexing tiles. The former is more common than the latter
in SME2, so this patch generalises the name to "indexed_za".
The patch also names the associated structure, so that later patches
can reuse it during parsing.
We already treat the ZA tiles ZA0-ZA15 as registers. This patch
does the same for ZA itself. parse_sme_zero_mask can then parse
ZA tiles and ZA in the same way, through parsed_type_reg.
One important effect of going through parsed_type_reg (in general)
is that it allows ZA to take qualifiers. This is necessary for many
SME2 instructions.
However, to support existing unqualified uses of ZA, parse_reg_with_qual
needs to treat the qualifier as optional. Hopefully the net effect is
to give better error messages, since now that SME2 makes "za.<T>"
valid in some contexts, it might be natural to use it (incorrectly)
in ZERO too.
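For example, ZERO operands such as the following (approximate syntax)
now go through the same register parsing as other ZA operands:

  zero    { za }
  zero    { za0.d, za2.d }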
While there, the patch also tweaks the error messages for invalid
ZA tiles, to try to make some cases more specific.
For now, parse_sme_za_array just uses parse_reg, rather than
parse_typed_reg/parse_reg_with_qual. A later patch consolidates
the parsing further.
This patch makes all SME instructions use F_STRICT, so that qualifiers
have to be provided explicitly rather than being inferred from other
operands. The main change is to move the qualifier setting from the
operand-level decoders to the opcode level.
This is one step towards consolidating the ZA parsing code and
extending it to handle SME2.
In the register-index forms of PRFM, the unallocated prefetch opcodes
24-31 have been reused for the encoding of the new RPRFM instruction.
The PRFM opcode space is now capped at 23 for these forms. The other
forms of PRFM are unaffected.
Most extension flags are named after the associated architectural
FEAT_* flags, but sme-i64 and sme-f64 were exceptions. This patch
adds sme-i16i64 and sme-f64f64 aliases, but keeps the old names too
for compatibility.
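For example, both spellings of the 64-bit integer extension should now
be accepted (illustrative use of the .arch_extension directive):

  .arch_extension sme-i64        // old name, kept for compatibility
  .arch_extension sme-i16i64     // new FEAT_-style alias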
With VexVVVV only being boolean, the SSE shift-by-immediate instructions
don't need special casing anymore for SSE2AVX handling. Simplify the two
respective templates. (No change to generated tables.)
With the SDM long having dropped the NDS/NDD/DDS concept of identifying
encoding variants, we can finally do away with this concept as well. Of
the few consumers of the attribute, only an assertion was still checking
for a particular value, which we don't really need to retain.
When touching lines anyway, modernize other aspects as well. This often
improves similarity to adjacent lines.
The function has accumulated a number of special cases for no real
reason. Some were necessary because insn attributes (SwapSources in
particular) weren't suitably utilized instead. Note that the addition of
SwapSources actually increases consistency among the templates: Like
others which already have the attribute, these are all insns where the
VEX.VVVV-encoded register comes first (or last when looking at the SDM).
Note that the vexvvvv attribute now has merely boolean meaning,
in line with the SDM long having dropped the NDS/NDD/DDS concept of
identifying encoding variants. The fallout will be taken care of
subsequently, though, to not further clutter the change here.
As to the TILEZERO special case: If more instructions like this
appeared, a new attribute would likely be the way to go. But as long as
it's only a single insn, going from the mnemonic is cheaper.
This reverts commit 92d450c79a.
Accessing these local var structs using a volatile qualified pointer
may indeed read the object, but I don't think changed values are
guaranteed to be written back to the object unless the actual object
is declared volatile. That would probably slow down i386 disassembly
unacceptably.
The following commit added libopcodes styling for m68k:
commit c22ff44927
Date: Tue Feb 14 18:07:19 2023 +0100
opcodes: style m68k disassembler output
but didn't set disassemble_info::created_styled_output in
disassemble.c, which is needed in order for GDB to start using the
libopcodes based styling.
This commit fixes that small oversight; GDB now styles m68k disassembly
correctly.
The feature isn't universally available on 64-bit CPUs.
Note that in i386-gen.c:isa_dependencies[] I'm only adding it to models
where I'm certain the functionality exists. I'm uncertain about Nocona
and Core in particular.
While MOV to/from segment register as well as selector storing insns
already permit 32- and 64-bit GPR operands, selector loading insns and
ARPL do not. Split templates accordingly.
For shifts (but not ordinary rotates) and other cases where an immediate
describes e.g. a bit count or position, allowing negative operands is at
best confusing. An extreme example would be the two rotate-through-carry
insns, where a negative value would _not_ mean rotating the
corresponding number of bits in the other direction. To refuse such,
give meaning to the combination of Imm8 and Imm8S in templates (so far
these weren't used together anywhere). The issue was with
smallest_imm_type() blindly setting .imm8 for signed numbers determined
to fit in a byte.
VPROT{B,W,D,Q} is a little special: The rotate count there is a signed
quantity, so Imm8 is replaced by Imm8S. Adjust affected testcases
accordingly as well.
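The practical effect is presumably along these lines (illustrative,
AT&T syntax):

  shl     $2, %eax     # still fine
  shl     $-1, %eax    # now rejected: the count is Imm8 only
  rol     $-1, %eax    # presumably still accepted (ordinary rotate)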
Another small adjustment to the testsuite is necessary: AAM and AAD were
never sensible to use with 0xffffff90 operands. This should have been an
error.
Just like we suppress emitting REX.W for e.g. MOV from/to segment
register, there's also no need for it for LAR and LSL - these can only
ever return 32-bit values and hence always zero-extend their results
anyway.
While there also drop the redundant Word from the first operand of
the second template each - this is already implied by Reg16.
In 64-bit mode BT can have REX.W or a data size prefix dropped in
certain cases. Outside of 64-bit mode all 4 insns can have the data
size prefix dropped in certain cases.