MultiShift, Masked shift and masked interleave #2431
base: master
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
AVX3_DL has `_mm_multishift_epi64_epi8`, `_mm256_multishift_epi64_epi8`, and `_mm512_multishift_epi64_epi8` intrinsics that can be used to implement MultiShift on AVX3_DL.
Nice implementation of MultiShift :) As John mentions, please add a TODO for the corresponding x86 _mm512_multishift so we can track that.
g3doc/quick_reference.md (Outdated)

    * `V`: `{u,i}64`, `VI`: `{u,i}8` \
      <code>V **MultiShift**(V vals, VI indices)</code>: returns a
      vector with `(vals[i] >> indices[i*8+j]) & 0xff` in byte `j` of `r[i]` for each
Let's define `r`, for example "vector `r`".
Done
    for (size_t i = 0; i < N; i++) {
      uint64_t shift_result = 0;
      for (int j = 0; j < 8; j++) {
        uint64_t rot_result = (v[i] >> indices[i*8+j]) | (v[i] << (64 - indices[i*8+j]));
Because we're only using the lower 8 bits, we can just talk about the right-shift and not a rotate, right?
Unfortunately not: in the cases where `indices[x]` is greater than 55, the lower bits start wrapping round. For example, for `indices[x] = 59` the output bits are: 2, 1, 0, 63, 62, 61, 60, 59.
Oh, I see - the x86 definition has & 63. Do we want to call it MultiRotateRight then?
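The rotate behaviour under discussion can be captured in a scalar model. This is an illustrative sketch (the name `MultiShiftLane` is hypothetical, not the Highway API): byte `j` of the result is the low 8 bits of the 64-bit lane rotated right by `indices[j] & 63`, matching the x86 `vpmultishiftqb` definition mentioned above.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical scalar model of one 64-bit lane of MultiShift.
// Byte j of the result holds the low 8 bits of v rotated right by
// indices[j] & 63, so indices > 56 wrap the low bits around.
uint64_t MultiShiftLane(uint64_t v, const uint8_t indices[8]) {
  uint64_t result = 0;
  for (int j = 0; j < 8; ++j) {
    const unsigned shift = indices[j] & 63u;
    // Guard shift == 0 to avoid the undefined v << 64.
    const uint64_t rotated =
        shift == 0 ? v : ((v >> shift) | (v << (64 - shift)));
    result |= (rotated & 0xFFu) << (8 * j);
  }
  return result;
}
```

With `indices[0] = 59` and only bit 63 set in the input, bit 63 rotates down into bit position 4 of the output byte, which a plain right-shift would lose.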
g3doc/quick_reference.md (Outdated)

    #### Masked Shifts

    * `V`: `{u,i}` \
      <code>V **MaskedShiftLeftOrZero**&lt;int&gt;(M mask, V a)</code> returns `a[i] << int` or `0` if
As before, let's drop the OrZero suffix?
Sorted
g3doc/quick_reference.md (Outdated)

    @@ -2081,6 +2124,12 @@ Ops in this section are only available if `HWY_TARGET != HWY_SCALAR`:
      `InterleaveOdd(d, a, b)` is usually more efficient than `OddEven(b,
      DupOdd(a))`.

    * <code>V **InterleaveEvenOrZero**(M m, V a, V b)</code>: Performs the same
Add Masked prefix and remove OrZero suffix? I notice this doesn't have an SVE implementation yet?
I've removed this, we'll add it to a future PR once we have an x86 implementation.
hwy/ops/generic_ops-inl.h (Outdated)

    @@ -574,6 +592,27 @@ HWY_API V MaskedSatSubOr(V no, M m, V a, V b) {
    }
    #endif  // HWY_NATIVE_MASKED_ARITH

    // ------------------------------ MaskedShift
    template <int kshift, class V, class M>
Nit: please rename kshift -> kShift.
Sorted
hwy/ops/generic_ops-inl.h (Outdated)

    @@ -7299,6 +7338,62 @@ HWY_API V BitShuffle(V v, VI idx) {

    #endif  // HWY_NATIVE_BITSHUFFLE

    // ------------------------------ MultiShift (Rol)
The ops in parentheses are ops that the implementation of the one in this section uses, for purposes of sorting the ops in the source file. I think this is a copy-paste remnant? We can remove it because this implementation does not seem to use any ops defined in generic_ops-inl.h.
Ah, that makes sense. I've corrected this.
hwy/ops/generic_ops-inl.h (Outdated)

    const auto extract_mask = Dup128VecFromValues(du8, 0, 2, 4, 6, 8, 10, 12, 14,
                                                  0, 0, 0, 0, 0, 0, 0, 0);
    const auto even_lanes =
        BitCast(d64, TableLookupBytes(extracted_even_bytes, extract_mask));
I think we could use ConcatEven here?
That's a better way to do it. I've made that change.
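For reference, what the original `TableLookupBytes` call with the 0, 2, 4, … index pattern was doing can be modelled per 128-bit block as a scalar gather of even-indexed bytes (an illustrative sketch, not the Highway API; `ConcatEven` achieves the equivalent compaction directly):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Gather the even-indexed bytes of a 16-byte block into its low 8 bytes,
// zeroing the high 8 -- the effect of the extract_mask table lookup above.
std::array<uint8_t, 16> GatherEvenBytes(const std::array<uint8_t, 16>& block) {
  std::array<uint8_t, 16> out{};  // zero-initialised high half
  for (int j = 0; j < 8; ++j) out[j] = block[2 * j];
  return out;
}
```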
Force-pushed from 70d6581 to c4a44dd.
* Remove OrZero suffixes
* Remove masked interleaves. Will be in a future PR with x86 specialisations
* Correct template naming
* Correct multishift header comment
* Optimise MultiShift
* Add x86 TODOs for multishift
* Improve MultiShift docs
Force-pushed from 37e7368 to 1e907b0.
      `mask[i]` is false.

    * `V`: `{u,i}` \
      <code>V **MaskedShiftRightOr**&lt;int&gt;(V no, M mask, V a)</code> returns `a[i] >> int` or `no` if
Minor clarification: no[i] here and below?
    #undef HWY_SVE_SHIFT_Z

    // ------------------------------ MaskedShiftRightSameOr
Stale comment, it's now called MaskedShiftRightOr?
Introduces the following operations in `generic_ops-inl.h` and `arm_sve-inl.h`, where the use of SVE intrinsics will provide a performance gain, with documentation in `g3doc/quick_reference.md`. Tests have been introduced for all the listed operations.