https://blog.timhutt.co.uk/riscv-vector/ has a visualisation of the element selection stuff at the end.
camel-cdr 4 hours ago [-]
I like this document, but it seems to be written with a very specific implementation in mind.
You can implement both regular SIMD ISAs and scalable SIMD/Vector ISAs in a "Vector processor" style and both in a regular SIMD style.
shash 3 hours ago [-]
It _is_ the RISC-V Vector extension, so there's a very specific ISA in mind, at the very least. There's another extension (not ratified, I think) called Packed SIMD for RISC-V, but this isn't about that.
camel-cdr 2 hours ago [-]
The name, yes, but going by the name is a bad idea, as the V in AVX also stands for Vector.
BTW, you'll be disappointed if you think of the P extension as something like SSE/AVX. The target for it is way lower power/perf, like a stripped-down MMX.
My point was about the underlying hardware implementation, specifically:
> "As shown in Figure 1-3, array processors scale performance spatially by replicating processing elements, while vector processors scale performance temporally by streaming data through pipelined functional units"
This applies to the hardware implementation, not the ISA, which the text doesn't make clear.
You can implement AVX-512 with a smaller data path than the register width and "scale performance temporally by streaming data through pipelined functional units". Zen 4 is a simple example of this, but there is nothing stopping you from implementing AVX-512 on top of heavily temporally pipelined 64-bit-wide execution units.
Similarly, you can implement RVV with a smaller data path than VLEN, but you can also implement it as a bog-standard SIMD processor.
The only thing that slightly complicates the comparison is LMUL, but it is fundamentally equivalent to unrolling.
The substantial difference between Vector and SIMD ISAs is imo only the existence of a vl-based predication mechanism.
Whether a SIMD ISA has a fixed register width, i.e. whether it lets you write vector-length-agnostic code, is an independent dimension of ISA design. E.g. the Cray-1 was without a doubt a Vector processor, but the vector registers on all compatible platforms had the exact same length. It did, however, have the aforementioned vl-based predication mechanism.
You could take AVX10/128, AVX10/256 and AVX10/512, overlap their instruction encodings, and end up with a scalable SIMD ISA for which you can write vector-length-agnostic code, but that doesn't make it a Vector ISA any more than it was before.
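A minimal sketch of what I mean by the vl mechanism, in C with the RVV v1.0 intrinsics (the function name and the LMUL choice are just illustrative): vsetvl returns how many elements get processed this pass, which is the vl-based predication, and the m4 types show LMUL=4 behaving like 4x unrolling.

    #include <riscv_vector.h>
    #include <stddef.h>
    #include <stdint.h>

    /* a[i] += b[i], strip-mined: vl-based predication handles the
       tail, and LMUL=4 (the m4 types) groups 4 vector registers,
       which is fundamentally the same as 4x unrolling at LMUL=1. */
    void vadd(int32_t *a, const int32_t *b, size_t n) {
        while (n > 0) {
            size_t vl = __riscv_vsetvl_e32m4(n);  /* elements this pass */
            vint32m4_t va = __riscv_vle32_v_i32m4(a, vl);
            vint32m4_t vb = __riscv_vle32_v_i32m4(b, vl);
            __riscv_vse32_v_i32m4(a, __riscv_vadd_vv_i32m4(va, vb, vl), vl);
            a += vl; b += vl; n -= vl;            /* decrease count, repeat */
        }
    }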
gchadwick 5 hours ago [-]
I've only taken a quick skim, but this looks like solid material!
RISC-V Vector is definitely tricky to get a handle on, especially if you just read the architecture documentation (which is to be expected, really; a good specification for an architecture isn't compatible with a useful beginner's guide). I found I needed to look at some presentations given by various members of the vector working group to get a good grasp of the principles.
There's been precious little material beyond the specification and some now slightly old slide decks so this is a great contribution.
veltas 1 hour ago [-]
The problem RISC-V has is that there's no middle ground.
The specification for an architecture is meant to be useful to anyone writing assembly, not just to people implementing the spec. Case in point: x86 manuals aren't meant for Intel, they're meant for Intel's customers.
There is a lot of cope around the fact that RISC-V's spec is particularly hard to use for writing assembly or for understanding the software model.
If the spec isn't a 'manual', then where's the manual? If there's just no manual, that's a deficiency. If we only have tutorials, that's bad as well: a manual is a good reference for an experienced user, and approachable to a slightly aware beginner (or a fresh beginner with experience in other architectures); a tutorial is too verbose to be useful as a regular reference.
Either the spec should have read (and still could read) more like a useful manual, or a useful manual needs to be provided.
geokon 5 hours ago [-]
On a high level, do I understand correctly that SIMD is close to how the hardware works, while Vector Processor is more of an abstraction? The "Strip Mining" part looks like the translation to something SIMD-like. It seems like a good abstraction layer, but there is an implicit compilation step, right? (making the "assembly" run more easily on different actual hardware)
Someone 4 hours ago [-]
> On a high level, do I understand correctly that SIMD is close to how the hardware works, while Vector Processor is more of an abstraction?
Not quite. It still is the same “process whatever number of items you can in parallel, decrease count by that, repeat if necessary” loop.
RISC-V decided to move the “decrease count by that, repeat if necessary” part into hardware, making the entire phrase “how the hardware works”.
Makes for shorter and nicer assembly. With SIMD, the code first has to query the CPU (once) to find out how much parallelism it can handle, and then do the “decrease count by that, repeat if necessary” part on the main CPU.
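For contrast, a fixed-width sketch of the same a[i] += b[i] operation (SSE2 intrinsics; illustrative only), where the width is baked in at compile time and both the bookkeeping and the tail live in scalar code:

    #include <emmintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* a[i] += b[i] at a fixed 128-bit width: 4 int32 per pass,
       plus a scalar loop for whatever the vector loop can't cover. */
    void vadd_sse2(int32_t *a, const int32_t *b, size_t n) {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
            __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
            _mm_storeu_si128((__m128i *)(a + i), _mm_add_epi32(va, vb));
        }
        for (; i < n; i++)  /* tail handled on the scalar side */
            a[i] += b[i];
    }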
dzaima 3 hours ago [-]
RVV still very much requires you to write a manual code/assembly loop doing the "compute how many elements can be handled, decrease count by that, repeat if necessary" thing. All it does is make it slightly fewer instructions (and it also allows handling a loop's tail in the same loop while at it).
Joker_vD 3 hours ago [-]
Yeah, except you don't need to rewrite that code every time a new AVX drops, and you also don't need to bother figuring out what to do on older CPUs.
IIRC libc for x64 has several implementations of memcpy/memmove/strlen/etc. for different SSE/AVX extensions, which all get compiled in and shipped to your system; when libc is loaded for the first time, it figures out the latest extension that the CPU it's running on actually supports and then patches its exports to point to the fastest working implementations.
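That mechanism is GNU IFUNC. A minimal sketch of the same trick (the variant names are made up and the bodies are stand-ins; relies on GCC's ifunc attribute and __builtin_cpu_supports, ELF targets only):

    #include <stddef.h>
    #include <string.h>

    typedef void *(*memcpy_fn)(void *, const void *, size_t);

    static void *my_memcpy_sse2(void *d, const void *s, size_t n) {
        return memcpy(d, s, n);  /* stand-in body for the sketch */
    }

    static void *my_memcpy_avx2(void *d, const void *s, size_t n) {
        return memcpy(d, s, n);  /* stand-in body for the sketch */
    }

    /* The resolver runs once, when the symbol is first bound, on the
       CPU the program actually runs on. */
    static memcpy_fn resolve_my_memcpy(void) {
        __builtin_cpu_init();
        return __builtin_cpu_supports("avx2") ? my_memcpy_avx2
                                              : my_memcpy_sse2;
    }

    void *my_memcpy(void *d, const void *s, size_t n)
        __attribute__((ifunc("resolve_my_memcpy")));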
dzaima 2 hours ago [-]
You don't need to write a new loop every time a new vector size drops, but over time you'll still accumulate more and more cases where you want to write multiple copies of loops to take advantage of new instructions; there's already a good bunch of extensions to RVV (e.g. Zvbb has a couple that come up in general-purpose code), and many more to come (e.g. if we ever get vrgathers that don't scale quadratically with LMUL, some mask ops, and who knows what else will be understood as obviously-good-to-have in the future).
This kinda (though admittedly not entirely) balances out the x86 problem - sure, you have to write a new loop to take advantage of wider vector registers, but you often want to do that anyway - on SSE→AVX(2) you get to take advantage of non-destructive ops, all inline loads being unaligned, and a couple new nice instrs; on AVX2→AVX512 you get a ton of masking stuff, non-awful blends, among others.
RVV gets an advantage here largely due to simply being a newer ISA, at a time when it is actually reasonably possible for even baseline hardware to support expensive compute instrs, complex shuffles, all unaligned mem ops (though, actually, with RISC-V/RVV not mandating unaligned support (and allowing it to be extremely slow even when supported), this is another thing you may want to write multiple loops for), and whatnot; whereas x86 SSE2 had to work on whatever could exist 20 years ago, and as such made the respective compromises.
In some edge cases the x86 approach can even be better - if you have some code that benefits from having different versions depending on hardware vector size (e.g. it needs to use vrgather, or processes some fixed-size data that'd be really bad to write in a scalable way), on RVV you may end up needing to write a loop for each combination of VLEN and extension set (i.e. a quadratic number of cases), whereas on x86 you only need a version of the loop for each desired extension set.
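(For what it's worth, on x86 the per-extension-set copies can be partly automated with compiler function multi-versioning; a sketch using GCC's target_clones, with an illustrative clone list:)

    #include <stddef.h>

    /* The compiler emits one clone per listed target and dispatches
       via ifunc at load time; each clone gets auto-vectorized for
       its target. */
    __attribute__((target_clones("avx512f", "avx2", "default")))
    void scale2(float *x, size_t n) {
        for (size_t i = 0; i < n; i++)
            x[i] *= 2.0f;
    }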
zozbot234 2 hours ago [-]
I wonder how this broadly compares with the new ARM64 SVE. Which is easier to adopt?
sylware 5 days ago [-]
Owww! Microsoft GitHub is becoming a web app (aka only for the WHATWG-cartel web engines); it is impossible to have a 'classic web' look at the repo. Must clone it now... thx Microsoft, again.
noodlesUK 5 hours ago [-]
This is great!
I’d love a similar document for ARM NEON as well.
Joker_vD 3 hours ago [-]
The example in 1.13 would probably work better if the version with scalar instructions actually had, you know, more instructions than the one with vector instructions. Otherwise, it's very taxing to read things like "static instruction count and dynamic instruction count both drop dramatically" when your eyes tell you that no, the static instruction count has actually increased.
Also, where does that 38-byte stride even come from? That number is not even divisible by 4, never mind by 8!