I have had so much success in adjusting Concertina II to achieve my goals more fully than I had thought possible... that I now think that it may be possible to proceed from Concertina II to a design which gets rid of the
one feature of Concertina II that has been the most controversial.
Yes, I think that I could actually do without block structure.
What would Concertina III look like?
Well, the basic instruction set would be similar to that of Concertina II. But the P bits would be taken out of the operate instructions, and so would the option of replacing a register specification by a pseudo-immediate pointer.
The tiny gaps between the opcodes of some instructions to squeeze out
space for block headers would be removed.
But the big opcode spaces that had been reserved for the shortest block header prefixes are what would be used instead, to do without headers.
Instead of a block header being used to indicate code consisting of variable-length instructions, variable-length instructions would be
contained within a sequence of pairs of 32-bit instructions of this form:
11110xx(17 bits)(8 bits)
11111x(9 bits)(17 bits)
Instructions could be 17 bits long, 34 bits long, 51 bits long, and so on, any multiple of 17 bits in length.
In the first instruction slot of the pair, the two bits xx indicate, for each of the two 17-bit areas of the variable-length instruction stream that start in that slot, whether it is the first 17-bit area of an instruction. The second instruction slot contains the start of only one 17-bit area, so only one bit x is needed. Since 17 is an odd number, this meshes perfectly with the fact that the 17-bit area which straddles both words isn't split evenly: one extra bit of it is in the second 32-bit instruction slot.
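As a concreteness check, here is a minimal C sketch of unpacking such a pair. The exact bit positions are my assumption (prefix in the top bits, fields packed left to right); the post only fixes the field widths:

```c
#include <stdint.h>
#include <assert.h>

/* Assumed layout (widths from the post, positions assumed):
   slot0: 11110 xx aaaaaaaaaaaaaaaaa bbbbbbbb   (5+2+17+8 bits)
   slot1: 11111 x  ccccccccc ddddddddddddddddd  (5+1+9+17 bits)
   Area 0 = a, area 1 = b:c (8 high bits + 9 low bits), area 2 = d. */

typedef struct {
    uint32_t area[3];   /* the three 17-bit areas */
    int      start[3];  /* 1 if the area begins a new instruction */
} pair17;

pair17 unpack_pair(uint32_t slot0, uint32_t slot1) {
    pair17 p;
    p.start[0] = (int)((slot0 >> 26) & 1u);   /* first x bit of slot0  */
    p.start[1] = (int)((slot0 >> 25) & 1u);   /* second x bit of slot0 */
    p.start[2] = (int)((slot1 >> 26) & 1u);   /* x bit of slot1        */
    p.area[0]  = (slot0 >> 8) & 0x1FFFFu;     /* 17 bits in slot0      */
    p.area[1]  = ((slot0 & 0xFFu) << 9)       /* 8 bits from slot0...  */
               | ((slot1 >> 17) & 0x1FFu);    /* ...9 bits from slot1  */
    p.area[2]  = slot1 & 0x1FFFFu;            /* 17 bits in slot1      */
    return p;
}
```

Note that each pair can be unpacked without looking at any neighboring pair, which is the point of the scheme.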
I had been hoping to use 18-bit areas instead, but after re-checking my calculations, I found there just wasn't enough opcode space.
Long instructions that contain immediates would not be part of variable-length instruction code. Instead, their lengths would be multiples of 32 bits, making them part of ordinary code with 32-bit instructions.
Their form would be like this:
32-bit immediate:
1010(12 bits)(16 bits)
10111(11 bits)(16 bits)
where the first parenthesized area belongs to the instruction, and the
second to the immediate.
48-bit immediate:
1010(12 bits)(16 bits)
10110(11 bits)(16 bits)
10111(11 bits)(16 bits)
64-bit immediate:
1010(12 bits)(16 bits)
10110(3 bits)(24 bits)
10111(3 bits)(24 bits)
Since the instruction, exclusive of the immediate, really only needs 12 bits - a 7-bit opcode and a 5-bit destination register - in each case there's enough additional space for the instruction to begin with a few bits that indicate its length, so that decoding is simple.
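A sketch of stitching the immediate pieces back together in C, assuming (the post doesn't say) that earlier words carry the more significant bits:

```c
#include <stdint.h>
#include <assert.h>

/* Reassembly of the split immediates from the three forms above.
   The high-bits-first ordering is my assumption, not stated in the post. */

uint64_t imm32(uint32_t w0, uint32_t w1) {
    return ((uint64_t)(w0 & 0xFFFFu) << 16)     /* 16 bits from word 0 */
         |  (uint64_t)(w1 & 0xFFFFu);           /* 16 bits from word 1 */
}

uint64_t imm48(uint32_t w0, uint32_t w1, uint32_t w2) {
    return ((uint64_t)(w0 & 0xFFFFu) << 32)     /* three 16-bit pieces */
         | ((uint64_t)(w1 & 0xFFFFu) << 16)
         |  (uint64_t)(w2 & 0xFFFFu);
}

uint64_t imm64(uint32_t w0, uint32_t w1, uint32_t w2) {
    return ((uint64_t)(w0 & 0xFFFFu)   << 48)   /* 16 + 24 + 24 bits   */
         | ((uint64_t)(w1 & 0xFFFFFFu) << 24)
         |  (uint64_t)(w2 & 0xFFFFFFu);
}
```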
The scheme is not really space-efficient.
But the question that I really have is... is this really any better than having block headers? Or is it just as bad, just as complicated?
How about, say, 16/32/48/64/96:
xxxx-xxxx-xxxx-xxx0 //16 bit
xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxyy-yyy1 //32 bit
xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xx11-1111 //64/48/96 bit prefix
Already elaborate enough...
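The length rule above boils down to a small per-word classifier; a C sketch:

```c
#include <stdint.h>
#include <assert.h>

/* Length determination for the proposed tagging:
   bit 0 clear          -> 16-bit op
   low 6 bits == 111111 -> prefix word (start of a 48/64/96-bit form)
   otherwise            -> plain 32-bit op */

enum kind { K16, K32, KPREFIX };

enum kind word_kind(uint32_t w) {
    if ((w & 1u) == 0)        return K16;
    if ((w & 0x3Fu) == 0x3Fu) return KPREFIX;
    return K32;
}
```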
On Sun, 31 Aug 2025 13:12:52 -0500, BGB wrote:
How about, say, 16/32/48/64/96:
xxxx-xxxx-xxxx-xxx0 //16 bit
xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxyy-yyy1 //32 bit
xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xx11-1111 //64/48/96 bit prefix
Already elaborate enough...
Thank you for your interesting suggestions.
I'm envisaging Concertina III as closely based on Concertina II, with only minimal changes.
Like Concertina II, it is to meet the overriding condition that
instructions do not have to be decoded sequentially. This means that
whenever an instruction, or group of instructions, spans more than 32
bits, the 32 bit areas of the instruction, other than the first, must
begin with a combination of bits that says "don't decode me".
The first 32 bits of an instruction get decoded directly, and then trigger and control the decoding of the rest of the instruction.
This has the consequence that any immediate value that is 32 bits or more
in length has to be split up into smaller pieces; this is what I really
don't like about giving up the block structure.
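The "don't decode me" condition can be checked on every 32-bit word independently and in parallel; a C sketch, using the 10110/10111 continuation tags from the long-immediate forms earlier in the thread:

```c
#include <stdint.h>
#include <assert.h>

/* A word whose top five bits are 10110 or 10111 is a continuation of a
   longer instruction and must not be decoded on its own; everything
   else can start decoding immediately, with no sequential scan. */

int is_continuation(uint32_t w) {
    unsigned top5 = w >> 27;
    return top5 == 0x16 || top5 == 0x17;  /* 10110 or 10111 */
}
```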
John Savard
On 9/2/2025 4:15 AM, John Savard wrote:
I'm envisaging Concertina III as closely based on Concertina II, with only minimal changes.
Like Concertina II, it is to meet the overriding condition that
instructions do not have to be decoded sequentially. This means that
whenever an instruction, or group of instructions, spans more than 32
bits, the 32 bit areas of the instruction, other than the first, must
begin with a combination of bits that says "don't decode me".
The first 32 bits of an instruction get decoded directly, and then trigger and control the decoding of the rest of the instruction.
This has the consequence that any immediate value that is 32 bits or more
in length has to be split up into smaller pieces; this is what I really
don't like about giving up the block structure.
Note that tagging like that described does still allow some amount of parallel decoding, since we still have combinatorial logic. Granted, scalability is an issue.
As can be noted, my use of jumbo-prefixes for large immediate values
does have the property of allowing reusing 32-bit decoders for 64-bit
and 96-bit instructions. In most cases, the 64-bit and 96-bit encodings don't change the instruction being decoded, but merely extend it.
Some internal plumbing is needed to stitch the immediate values together though, typically:
We have OpA, OpB, OpC
DecC gets OpC, and JBits from OpB
DecB gets OpB, and JBits from OpA
DecA gets OpA, and 0 for JBits.
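A C sketch of that routing (the is_jumbo() tag and the 24-bit payload field here are made-up stand-ins, not the actual XG encodings):

```c
#include <stdint.h>
#include <assert.h>

/* Each decoder lane receives its own instruction word plus the payload
   bits (JBits) of the preceding word, valid only when that preceding
   word was a jumbo prefix.  The 0xFE tag and 24-bit payload are
   hypothetical placeholders. */

typedef struct { uint32_t op; uint32_t jbits; int jvalid; } dec_in;

static int is_jumbo(uint32_t w) { return (w >> 24) == 0xFEu; } /* made-up tag */

void route(uint32_t opA, uint32_t opB, uint32_t opC, dec_in in[3]) {
    in[0].op = opA; in[0].jbits = 0;                in[0].jvalid = 0;
    in[1].op = opB; in[1].jbits = opA & 0xFFFFFFu;  in[1].jvalid = is_jumbo(opA);
    in[2].op = opC; in[2].jbits = opB & 0xFFFFFFu;  in[2].jvalid = is_jumbo(opB);
}
```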
In my CPU core, I had a few times considered changing how decoding
worked, to either reverse or right-align the instruction block to reduce
the amount of MUX'ing needed in the decoder. If going for right-
alignment, then DecC would always go to Lane1, DecB to Lane2, and DecA
to Lane3.
Can note that for immediate-handling, the Lane1 decoder produces the low 33 bits of the result. If a decoder holds a jumbo prefix and is itself fed JBits from another jumbo prefix, it assumes a 96-bit encoding and produces the value for the high 32 bits.
At least in my designs, I only account for 33 bits of immediate per lane; when a full 64-bit immediate is encoded, its value is instead assembled in the ID2/RF stage.
Admittedly, my CPU core design did fall back to sequential execution for 16-bit ops, though this was partly for cost reasons.
For BJX2/XG1 originally, it was because the instructions couldn't use
WEX tagging, but after adding superscalar it was because I would either
need multiple parallel 16-bit decoders, or to change how 16 bit ops were handled (likely using a 16->32 repacker).
So, say:
  IF stage:
    Retrieve instruction from cache line;
    Determine fetch length:
      XG1/XG2 used explicit tagging;
      XG3 and RV use superscalar checks.
    Run repackers.
      Currently both XG3 and RISC-V 48-bit ops are handled by repacking.
  Decode stage:
    Decode N parallel 32-bit ops;
    Prefixes route to the corresponding instructions;
    Any lane holding solely a prefix goes NOP.
For a repacker, it would help if there were fairly direct mappings between the 16-bit and 32-bit ops. Contrary to claims, RVC does not appear to fit such a pattern. Personally, I don't find much good to say about RVC's encoding scheme, as it is very much ad-hoc dog chew.
The usual claim is more that it is "compressed" in that you can first generate a 32-bit op internally and "squish" it down into a 16-bit form if it fits. This isn't terribly novel as I see it. Repacking RVC has similar problems to decoding it directly, namely that a fair number of instructions each have their own idiosyncratic encoding scheme (so you can't simply shuffle some of the bits around, fill others with 0s, and arrive back at a valid RVI instruction).
By contrast, XG3 is mostly XG2 with the bits shuffled around, though there were some special cases made in the decoding rules. Admittedly, I did do more than the bare minimum here (to fit it into the same encoding space as RV), mostly as I ended up going for a "Dog Chew Reduction" route rather than merely doing the bare minimum bit-shuffling needed to make it fit.
For better or worse, it effectively made XG3 its own ISA as far as BGBCC is concerned. Even if in theory I could have used repacking, the original XG1/XG2 emitter logic is a total mess. It was written originally for fixed-length 16-bit ops, so it encodes and outputs instructions 16 bits at a time using big "switch()" blocks (the RISC-V and XG3 emitters also went this path; as far as BGBCC is concerned, it treats XG3 as part of RISC-V land).
Both the CPU core and JX2VM handle it by repacking to XG2, though. The XG3VM (a userland-only emulator for now) instead decodes XG3 directly, with decoders for XG3, RVI, and RVC.
I had noted the relative irony that, despite XG3 having a longer instruction listing than RVI, it still ends up with a slightly shorter decoder.
Some of this comes down to one big annoyance of RISC-V's encoding scheme: its inconsistent and dog-chewed handling of immediate and displacement values.
Though, for mixed output, there are still a handful of cases where RVI encodings can beat XG3 encodings, mostly where the RVI encodings have a slightly larger displacement.
In compiler stats, this seems to mostly affect:
LB, LBU, LW, LWU
SB, SW
ADDI, ADDIW, LUI
The former: unscaled 12-bit beats scaled 10-bit for 8- and 16-bit load/store.
ADDI: 12b > 10b.
LUI: loading a 32-bit value of the form XXXXX000 does happen sometimes, it seems.
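The load/store point is simple arithmetic; a C sketch of the largest positive byte offset reachable by a signed displacement field, scaled vs unscaled:

```c
#include <assert.h>

/* Largest positive byte offset for a signed displacement field of the
   given width, scaled by the access size.  For 1-byte accesses, 12
   unscaled bits reach 2047 bytes while 10 scaled bits reach only 511;
   at larger element sizes the scaled form catches up. */

int max_reach_scaled(int bits, int scale) {
    return ((1 << (bits - 1)) - 1) * scale;
}
```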
Instruction counts are low enough that a "pure XG3" would likely result
in Doom being around 1K larger (the cases where RVI ops are used would
need a 64-bit jumbo-encoding in XG3).
Though, the relative wonk of handling ADD in XG1/XG2/XG3 by using separate Imm10u/Imm10n encodings, rather than an Imm10s, does have merit in that this effectively gives it an Imm11s-equivalent range; and ADD is one of the main instructions that tends to be big-immediate-heavy. (In early design it was a close race between ADD ImmU/ImmN vs ADD/SUB ImmU, but the current scheme has a tiny bit more range, albeit SUB-ImmU could possibly have avoided the need for an ImmN case.)
So, say:
  ADD: Large-immediate heavy.
  SUB: Can reduce to ADD in the immediate case.
  AND: Preferable to have signed immediate values;
    not common enough to justify the ImmU/ImmN scheme.
  OR:  Almost exclusively positive.
  XOR: Almost exclusively positive.
Load/store displacements are very lopsided in the positive direction.
  Disp10s slightly beats Disp10u though:
  negative displacements are more common than displacements in 512..1023.
XG1 had sorta hacked around it with Disp9u plus Disp5n.
  Disp10u plus Disp6n was considered, but it didn't go that way.
  Disp10s was at least "slightly less ugly",
  even if Disp10u+Disp6n would arguably have been better for code density.
Or, cough, someone could maybe do signed load/store displacements like:
000..110: Positive
111: Negative
So, Disp10as is 0..1791, -256..-1
Would better fit the statistical distribution, but... Errm...
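A decode sketch for that scheme in C. The stated range 0..1791, -256..-1 works out if the raw field is taken as 11 bits (whether it is literally 11 bits or a scaled 10-bit field is my assumption):

```c
#include <assert.h>

/* "Mostly positive" signed displacement: top three bits 000..110 mean
   positive, 111 means negative.  For an 11-bit raw field this yields
   exactly 0..1791 and -256..-1. */

int disp_as_decode(unsigned field) {          /* field: 0..2047 */
    return (field >> 8) == 7 ? (int)field - 2048 : (int)field;
}
```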
...
John Savard wrote:
[...]
Like Concertina II, it is to meet the overriding condition that
instructions do not have to be decoded sequentially.
[...]
Lest one thinks this results in serial decoding, consider that the pattern decoder is 40 gates (just larger than 3 flip-flops), so one can afford to put this pattern decoder on every word in the instruction buffer.