
MultiShift, Masked shift and masked interleave #2431

Open: wants to merge 3 commits into master from cc_up_shift_and_interleave

Conversation

mazimkhan (Contributor)

Introduces the following operations for generic_ops-inl.h and arm_sve-inl.h, where the use of SVE intrinsics provides a performance gain:

  • MultiShift - an arbitrary set of shifts performed on each byte in each 64-bit wide lane of a vector. A fuller description can be found in g3doc/quick_reference.md.
  • Masked shifts - shift operations that take a mask; lanes where the mask is false receive an alternative value.
  • Masked interleave - operations that interleave either the even or the odd elements of two input vectors; lanes where the mask is false are set to zero.

Tests have been introduced for all the listed operations.
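For readers unfamiliar with the multishift semantics, here is a scalar sketch (a hypothetical helper, not part of the PR) of what one 64-bit lane computes:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical scalar model of one 64-bit MultiShift lane: byte j of the
// result is the low byte of `val` rotated right by indices[j] & 63 (the
// x86 VPMULTISHIFTQB definition masks the amount with 63 in the same way).
uint64_t MultiShiftLane(uint64_t val, const uint8_t indices[8]) {
  uint64_t r = 0;
  for (int j = 0; j < 8; ++j) {
    const unsigned amt = indices[j] & 63;
    // Guard amt == 0 to avoid the undefined shift by 64.
    const uint64_t rotated =
        amt == 0 ? val : (val >> amt) | (val << (64 - amt));
    r |= (rotated & 0xFFull) << (8 * j);
  }
  return r;
}
```

With indices {0, 8, 16, ..., 56}, each result byte selects the corresponding byte of the input, so the lane is returned unchanged.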


google-cla bot commented Jan 6, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@johnplatts (Contributor)

AVX3_DL has _mm_multishift_epi64_epi8, _mm256_multishift_epi64_epi8, and _mm512_multishift_epi64_epi8 intrinsics that can be used to implement MultiShift on AVX3_DL.

@jan-wassenberg (Member) left a comment

Nice implementation of MultiShift :) As John mentions, please add a TODO for the corresponding x86 _mm512_multishift so we can track that.


* `V`: `{u,i}64`, `VI`: `{u,i}8` \
<code>V **MultiShift**(V vals, VI indices)</code>: returns a
vector with `(vals[i] >> indices[i*8+j]) & 0xff` in byte `j` of `r[i]` for each

Member:

Let's define r, for example "vector r".

Contributor:

Done

for (size_t i = 0; i < N; i++) {
  uint64_t shift_result = 0;
  for (int j = 0; j < 8; j++) {
    uint64_t rot_result = (v[i] >> indices[i*8+j]) | (v[i] << (64 - indices[i*8+j]));

Member:

Because we're only using the lower 8 bits, we can just talk about the right-shift and not a rotate, right?

Contributor:

Unfortunately not: in cases where indices[x] is greater than 55, the lower bits wrap around.

e.g. for indices[x] = 59, the output bits are: 2, 1, 0, 63, 62, 61, 60, 59

Member:

Oh, I see - the x86 definition has & 63. Do we want to call it MultiRotateRight then?
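A small demonstration (with hypothetical helper functions and values) of the point above: for amounts greater than 55, the extracted byte wraps around into the low bits of the lane, so a plain logical right-shift and a rotate disagree.

```cpp
#include <cassert>
#include <cstdint>

// Extract a byte with a plain logical right-shift.
uint8_t ByteViaShift(uint64_t v, unsigned amt) {
  return static_cast<uint8_t>(v >> amt);
}

// Extract a byte with a right-rotate; requires 0 < amt < 64.
uint8_t ByteViaRotate(uint64_t v, unsigned amt) {
  return static_cast<uint8_t>((v >> amt) | (v << (64 - amt)));
}
```

For v = 0x7 and amt = 59, the shift yields 0 while the rotate yields 0xE0, because bits 2..0 wrap into the top of the extracted byte; for amounts of 55 or less the two agree.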


#### Masked Shifts
* `V`: `{u,i}` \
<code>V **MaskedShiftLeftOrZero**&lt;int&gt;(M mask, V a)</code> returns `a[i] << int` or `0` if

Member:

As before, let's drop the OrZero suffix?

Contributor:

Sorted
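The masked-shift semantics being documented here can be modelled in scalar code (a hypothetical array-based helper; the real op operates on Highway vectors and masks):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical scalar model of MaskedShiftLeft<kShift>(mask, a):
// lanes where the mask is false are zeroed.
template <int kShift, size_t N>
std::array<uint32_t, N> MaskedShiftLeft(const std::array<bool, N>& mask,
                                        const std::array<uint32_t, N>& a) {
  std::array<uint32_t, N> r{};
  for (size_t i = 0; i < N; ++i) {
    r[i] = mask[i] ? (a[i] << kShift) : 0u;
  }
  return r;
}
```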

@@ -2081,6 +2124,12 @@ Ops in this section are only available if `HWY_TARGET != HWY_SCALAR`:
`InterleaveOdd(d, a, b)` is usually more efficient than `OddEven(b,
DupOdd(a))`.

* <code>V **InterleaveEvenOrZero**(M m, V a, V b)</code>: Performs the same

Member:

Add Masked prefix and remove OrZero suffix? I notice this doesn't have an SVE implementation yet?

Contributor:

I've removed this; we'll add it in a future PR once we have an x86 implementation.

@@ -574,6 +592,27 @@ HWY_API V MaskedSatSubOr(V no, M m, V a, V b) {
}
#endif // HWY_NATIVE_MASKED_ARITH

// ------------------------------ MaskedShift
template <int kshift, class V, class M>

Member:

Nit: please rename kshift -> kShift.

Contributor:

Sorted

@@ -7299,6 +7338,62 @@ HWY_API V BitShuffle(V v, VI idx) {

#endif // HWY_NATIVE_BITSHUFFLE

// ------------------------------ MultiShift (Rol)

Member:

The ops in parentheses are ops that the implementation of the one in this section uses, for purposes of sorting the ops in the source file. I think this is a copy-paste remnant? We can remove it because this implementation does not seem to use any ops defined in generic_ops-inl.h.

Contributor:

Ah, that makes sense. I've corrected this.

const auto extract_mask = Dup128VecFromValues(du8, 0, 2, 4, 6, 8, 10, 12, 14,
0, 0, 0, 0, 0, 0, 0, 0);
const auto even_lanes =
BitCast(d64, TableLookupBytes(extracted_even_bytes, extract_mask));

Member:

I think we could use ConcatEven here?

Contributor:

That's a better way to do it. I've made that change.
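The suggested ConcatEven can be modelled in scalar code (a hypothetical array-based helper; per Highway's quick_reference, the result's lower half holds the even lanes of lo and the upper half the even lanes of hi):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical scalar model of ConcatEven(d, hi, lo) on byte lanes.
// N is assumed even.
template <size_t N>
std::array<uint8_t, N> ConcatEven(const std::array<uint8_t, N>& hi,
                                  const std::array<uint8_t, N>& lo) {
  std::array<uint8_t, N> r{};
  for (size_t i = 0; i < N / 2; ++i) {
    r[i] = lo[2 * i];          // even lanes of lo -> lower half
    r[N / 2 + i] = hi[2 * i];  // even lanes of hi -> upper half
  }
  return r;
}
```

This achieves in a single op what the table-lookup with a hand-built extract mask was doing above.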

@wbb-ccl force-pushed the cc_up_shift_and_interleave branch from 70d6581 to c4a44dd on January 31, 2025 11:01
mazimkhan and others added 3 commits January 31, 2025 12:21
Remove OrZero suffixes
Remove masked interleaves. Will be in a future PR with x86 specialisations
Correct template naming
Correct multishift header comment
Optimise MultiShift
Add x86 TODOs for multishift
Improve MultiShift docs
@wbb-ccl force-pushed the cc_up_shift_and_interleave branch from 37e7368 to 1e907b0 on January 31, 2025 12:21
`mask[i]` is false.

* `V`: `{u,i}` \
<code>V **MaskedShiftRightOr**&lt;int&gt;(V no, M mask, V a)</code> returns `a[i] >> int` or `no` if

Member:

Minor clarification: no[i] here and below?
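A scalar model of the documented behaviour (a hypothetical array-based helper), showing no[i] being used exactly where the mask is false:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical scalar model of MaskedShiftRightOr<kShift>(no, mask, a):
// lanes with a false mask receive no[i] rather than the shifted value.
template <int kShift, size_t N>
std::array<uint32_t, N> MaskedShiftRightOr(const std::array<uint32_t, N>& no,
                                           const std::array<bool, N>& mask,
                                           const std::array<uint32_t, N>& a) {
  std::array<uint32_t, N> r{};
  for (size_t i = 0; i < N; ++i) {
    r[i] = mask[i] ? (a[i] >> kShift) : no[i];
  }
  return r;
}
```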


#undef HWY_SVE_SHIFT_Z

// ------------------------------ MaskedShiftRightSameOr

Member:

Stale comment, it's now called MaskedShiftRightOr?
