Add Grouped Convolution and GEMM documentation (#1719)
[Back to supported operations](../../../include/ck/README.md)
# Composable Kernel GEMM

## GEMM
General matrix multiplication operation. In CK, the GEMM operation is called `DeviceGemm` and requires the following types as template parameters:

* **ALayout** - A matrix layout (RowMajor/ColumnMajor).
* **BLayout** - B matrix layout (RowMajor/ColumnMajor).
* **CLayout** - C matrix layout (RowMajor/ColumnMajor).
* **ADataType** - A matrix data type.
* **BDataType** - B matrix data type.
* **CDataType** - C matrix data type.
* **AElementwiseOperation** - Fused operation on tensor A before the GEMM.
* **BElementwiseOperation** - Fused operation on tensor B before the GEMM.
* **CElementwiseOperation** - Fused operation on tensor C after the GEMM.

For matrices with a large K dimension, the `DeviceGemmSplitK` implementation is available. It allows the user to split the K dimension between work groups. Because it uses the `AtomicAdd` operation on global memory, the output buffer must be zeroed out for correct results.
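The split-K scheme can be sketched in plain Python (illustrative reference semantics only, not the CK API): each work group computes a partial product over its slice of K and accumulates it into the output, which is exactly why the output buffer must start zeroed.

```python
# Reference semantics of split-K GEMM accumulation (illustrative only,
# not CK code). Each loop iteration over `s` models one work group that
# handles one chunk of the K dimension.
def splitk_gemm(A, B, k_splits):
    M, K = len(A), len(A[0])
    N = len(B[0])
    C = [[0.0] * N for _ in range(M)]      # must start zeroed: AtomicAdd accumulates
    chunk = (K + k_splits - 1) // k_splits
    for s in range(k_splits):              # each "work group" owns K range [k0, k1)
        k0, k1 = s * chunk, min((s + 1) * chunk, K)
        for i in range(M):
            for j in range(N):
                partial = sum(A[i][k] * B[k][j] for k in range(k0, k1))
                C[i][j] += partial         # models AtomicAdd on global memory
    return C
```

Because every work group only adds its partial sum, a stale (non-zero) output buffer would be silently folded into the result.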

For fused operations with additional tensors, the `DeviceGemmMultipleABD` and `DeviceGemmMultipleD` operations are available. They require the following additional parameters:
* **DsLayout** - layouts of the additional tensors used by the fused operations.
* **DsDataType** - data types of the additional tensors used by the fused operations.

For `DeviceGemmMultipleABD`, the user should pass tuples for **ALayout**, **BLayout**, **ADataType** and **BDataType**.
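The semantics of these fused operations can be sketched as follows (a plain-Python reference under assumed semantics, not the CK interface): the elementwise operations are applied to A and B elements as they are read, and the CDE operation combines the GEMM accumulator with the corresponding element of each D tensor.

```python
# Illustrative reference semantics for DeviceGemmMultipleD-style fusion:
# E[m][n] = cde_op(sum_k a_op(A[m][k]) * b_op(B[k][n]), D0[m][n], D1[m][n], ...)
def gemm_multiple_d(A, B, Ds, a_op, b_op, cde_op):
    M, K, N = len(A), len(A[0]), len(B[0])
    E = [[0.0] * N for _ in range(M)]
    for m in range(M):
        for n in range(N):
            acc = sum(a_op(A[m][k]) * b_op(B[k][n]) for k in range(K))
            # the fused epilogue, e.g. Bilinear or Add + Relu
            E[m][n] = cde_op(acc, *(D[m][n] for D in Ds))
    return E

# Example epilogue: "Add + Relu" with one additional D tensor
identity = lambda x: x
add_relu = lambda c, d: max(c + d, 0.0)
```

In the real kernels this epilogue runs in registers before the store, so the extra tensors never require a separate pass over the output.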

List of the device operations in CK:

* **DeviceGemmDl** - Device operation with DL instructions.
* **DeviceGemmDpp** - Device operation with DL instructions that uses DPP instructions during data load.
* **DeviceGemmWmma_CShuffle** - Device operation with WMMA instructions and the CShuffle optimization for a more efficient data store.
* **DeviceGemm_Xdl_CShuffle_LdsDirectLoad** - Device operation with XDL instructions, the CShuffle optimization, and a direct load from global memory to shared memory.
* **DeviceGemm_Xdl_CShuffle** - Device operation with XDL instructions and the CShuffle optimization for a more efficient data store.
* **DeviceGemm_Xdl_CShuffleV2** - Device operation with XDL instructions and the CShuffle optimization for a more efficient data store. The GEMM pipeline has been optimized compared to **DeviceGemm_Xdl_CShuffle**.
* **DeviceGemmXdlSkipBLds** - Device operation with XDL instructions. The load to shared memory is skipped for the B matrix.
* **DeviceGemm_Xdl_WaveletModel_CShuffle** - Device operation with XDL instructions and the CShuffle optimization. Uses a producer/consumer cooperation scheme between waves in a workgroup.
* **DeviceGemmXdl** - Device operation with XDL instructions.

Supported data types in the instance factory with XDL instructions for Row/Row/Row, Row/Column/Row, Column/Row/Row or Column/Column/Row:

|Data type|Supported|
|-------|---|
|bf16|✓|
|fp16|✓|
|fp32|✓|
|int8|✓|
|fp8 |✓|

Supported data types in the instance factory with WMMA instructions for Row/Row/Row, Row/Column/Row, Column/Row/Row or Column/Column/Row:

|Data type|Supported|
|-------|---|
|bf16|✓|
|fp16|✓|
|fp32|✗|
|int8|✓|
|fp8 |✗|

Supported data types in the instance factory with DL instructions for Row/Row/Row, Row/Column/Row, Column/Row/Row or Column/Column/Row:

|Data type|Supported|
|-------|---|
|bf16|✗|
|fp16|✓|
|fp32|✓|
|int8|✓|
|fp8 |✗|

Supported fused output elementwise operations in the instance factory:

* **B Matrix Multiply + Add + Gelu** - bf16 (int8 for B matrix)
* **B Matrix Multiply + Add** - bf16 (int8 for B matrix)
* **B Matrix Multiply + Gelu** - bf16 (int8 for B matrix)
* **B Matrix Multiply** - bf16 (int8 for B matrix)

* **Add + Add + Gelu** - fp16
* **Add + Gelu** - fp16; bf16 (int8 for B matrix) for Row/Column/Row
* **Multiply** - fp16
* **Add + Multiply** - fp16
* **Add + Relu** - fp16 (int8 for B matrix) for Row/Column/Row; bf16 (int8 for B matrix) for Row/Column/Row
* **Add + Silu** - fp16 (int8 for B matrix) for Row/Column/Row; bf16 (int8 for B matrix) for Row/Column/Row
* **Add** - fp16 (int8 for B matrix) for Row/Column/Row; bf16 (int8 for B matrix) for Row/Column/Row
* **Bilinear** - fp16, int8
* **Gelu** - fp16
* **Multiply + Add** - fp16 for Row/Column/Row and Row/Row/Row; fp16 (int8 for B matrix, fp32 for Bias) for Row/Column/Row and Row/Row/Row
* **Quantization** - int8

## GEMM V2 (Universal GEMM)
General matrix multiplication operation optimized for the AMD Instinct MI300 series. The operation is called `DeviceGemmV2` and requires the following types as template parameters:

* **ALayout** - A matrix layout (RowMajor/ColumnMajor).
* **BLayout** - B matrix layout (RowMajor/ColumnMajor).
* **CLayout** - C matrix layout (RowMajor/ColumnMajor).
* **ADataType** - A matrix data type.
* **BDataType** - B matrix data type.
* **CDataType** - C matrix data type.
* **AElementwiseOperation** - Fused operation on tensor A before the GEMM.
* **BElementwiseOperation** - Fused operation on tensor B before the GEMM.
* **CElementwiseOperation** - Fused operation on tensor C after the GEMM.

This implementation allows the user to split the K dimension between work groups. It uses the `AtomicAdd` operation on global memory, so the output buffer must be zeroed out if the splitK parameter is larger than one.

List of the device operations in CK:

* **DeviceGemm_Xdl_CShuffleV3** - Device operation with XDL instructions and the CShuffle optimization for a more efficient data store.
* **DeviceGemm_Xdl_CShuffleV3R1** - Device operation with XDL instructions and the CShuffle optimization for a more efficient data store. This implementation performs a reduction over the split K dimension after the GEMM instead of using `AtomicAdd`.

Supported data types in the instance factory with XDL instructions for Row/Row/Row, Row/Column/Row, Column/Row/Row or Column/Column/Row:

|Data type|Supported|
|-------|---|
|bf16|✓|
|fp16|✓|
|fp32|✗|
|int8|✗|
|fp8 (C bf16)|✓|
|fp16 (A fp8)|✓|
|fp16 (B fp8)|✓|

## Others

* **DeviceGemm_dequantB** - GEMM with dequantization (implemented with WMMA instructions).
* **DeviceGemmMultipleD_ABScale** - GEMM with scale for the A and B matrices.
* **DeviceGemmMultipleDLayernorm** - GEMM fused with layernorm.
* **DeviceGemmMultipleDMultipleR** - GEMM fused with reductions and custom global reduction operators.
* **DeviceGemmReduce** - GEMM fused with reduction.
* **DeviceGemm_Streamk_V2** - GEMM stream-K implementation. Allows using a reduction instead of `AtomicAdd`.
* **DeviceGemmStreamK** - GEMM stream-K implementation using `AtomicAdd`.
[Back to supported operations](../../../include/ck/README.md)
# Composable Kernel Grouped Convolution

## Grouped Convolution Forward
Grouped convolution operation for 1D, 2D or 3D spatial dimensions. The convolution utilizes a GEMM kernel after a tensor coordinate transform. In CK, the Grouped Convolution Forward operation is called `DeviceGroupedConvFwdMultipleABD` and requires the following types as template parameters:

* **NumDimSpatial** - number of spatial dimensions (1D, 2D, 3D).
* **InLayout** - input layout (NHWGC, GNHWC, NGCHW).
* **WeiLayout** - weight layout (GKYXC).
* **DsLayout** - layouts of the additional tensors used by the fused operations.
* **OutLayout** - output layout (NHWGK, GNHWK, NGKHW).
* **ADataType** - input data type. Pass a tuple if there is a fused operation with the input.
* **BDataType** - weight data type. Pass a tuple if there is a fused operation with the weight.
* **DsDataType** - data types of the additional tensors used by the fused operations.
* **EDataType** - output data type.
* **AElementwiseOperation** - fused operation on tensor A (input).
* **BElementwiseOperation** - fused operation on tensor B (weight).
* **CDEElementwiseOperation** - fused operation on tensor C (output).
* **AComputeType** - compute data type of tensor A for the mfma instruction (ADataType by default).
* **BComputeType** - compute data type of tensor B for the mfma instruction (AComputeType by default).
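The "GEMM kernel after tensor coordinate transform" view can be sketched for the 1D case (a plain-Python reference assuming stride 1 and no padding; not the CK implementation): each group performs an independent GEMM whose row index runs over `N * Wo`, whose column index runs over `K`, and whose reduction index runs over `X * C`.

```python
# Reference semantics only: grouped 1D convolution forward expressed as one
# implicit GEMM per group (stride 1, no padding assumed).
# Per-group GEMM dims: M = N * Wo, N = K, reduction K = X * C.
def grouped_conv1d_fwd(inp, wei, groups):
    # inp: [N][W][G*C] (NWGC-style), wei: [G][K][X][C], out: [N][Wo][G*K]
    n_batch, W = len(inp), len(inp[0])
    G = groups
    K, X, C = len(wei[0]), len(wei[0][0]), len(wei[0][0][0])
    Wo = W - X + 1
    out = [[[0.0] * (G * K) for _ in range(Wo)] for _ in range(n_batch)]
    for g in range(G):                      # groups are independent GEMMs
        for n in range(n_batch):
            for wo in range(Wo):            # GEMM row index m = n * Wo + wo
                for k in range(K):          # GEMM column index
                    acc = 0.0
                    for x in range(X):      # GEMM reduction index = x * C + c
                        for c in range(C):
                            acc += inp[n][wo + x][g * C + c] * wei[g][k][x][c]
                    out[n][wo][g * K + k] = acc
    return out
```

The coordinate transform is the index arithmetic above; no im2col buffer is materialized, which is the point of the implicit-GEMM formulation.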

Grouped convolution forward supports tensors larger than 2GB.

List of the device operations for grouped convolution forward in CK:

* **DeviceGroupedConvFwdMultipleABD_Xdl_CShuffle_V3** - Device operation with XDL instructions. Optimized for the AMD Instinct MI300 series.
* **DeviceGroupedConvFwdMultipleABD_Xdl_CShuffle** - Device operation with XDL instructions and support for fused operations on the input, weight and output.
* **DeviceGroupedConvFwdMultipleD_Wmma_CShuffle** - Device operation with WMMA instructions.
* **DeviceGroupedConvFwdDlMultipleD_NHWC_KYXC_NHWK** - Device operation with DL instructions.

Supported cases in the instance factory with XDL instructions:

|Data type|NHWGC/GKYXC/NHWGK|NGCHW/GKYXC/NGKHW|GNHWC/GKYXC/GNHWK|
|-------|---|---|---|
|bf16 |2D, 3D|2D|1D, 2D, 3D|
|fp16 |2D, 3D|2D|1D, 2D, 3D|
|fp32 |2D, 3D|2D|1D, 2D, 3D|
|int8 |2D, 3D|2D|1D, 3D|
|fp8 |3D|✗|✗|
|bf8 |3D|✗|✗|

Supported cases in the instance factory with WMMA instructions:

|Data type|NHWGC/GKYXC/NHWGK|NGCHW/GKYXC/NGKHW|GNHWC/GKYXC/GNHWK|
|-------|---|---|---|
|fp16 |2D, 3D|✗|2D, 3D|
|int8 |2D, 3D|✗|2D, 3D|

Supported cases in the instance factory with DL instructions:

|Data type|NHWGC/GKYXC/NHWGK|NGCHW/GKYXC/NGKHW|GNHWC/GKYXC/GNHWK|
|-------|---|---|---|
|bf16 |✗|✗|2D|
|fp16 |✗|✗|2D|
|fp32 |✗|✗|2D|
|int8 |✗|✗|2D|

Supported fused elementwise operations in the instance factory:

* **Dynamic elementwise operation** - 2D/3D, NHWGC, bf16/fp16/fp32/int8
* **Bilinear** - 3D, NHWGC, bf16/fp16/fp32/int8
* **ConvInvScale** - 3D, NHWGC, fp8
* **ConvScale** - 3D, NHWGC, fp8/bf8
* **ConvScale + Add** - 3D, NHWGC, fp8
* **ConvScale + Relu** - 3D, NHWGC, fp8
* **Scale** - 3D, NHWGC, bf16/fp16/fp32/int8
* **Scale + Add (for A and B)** - 3D, NHWGC, bf16/fp16/fp32/int8
* **Scale + Add + Scale + Add + Relu** - 3D, NHWGC, bf16/fp16/fp32/int8
[Back to supported operations](../../../include/ck/README.md)
# Composable Kernel Grouped Convolution

## Grouped Convolution Backward Data

Grouped convolution operation for 1D, 2D or 3D spatial dimensions. The convolution utilizes a GEMM kernel after a tensor coordinate transform. In CK, the Grouped Convolution Backward Data operation is called `DeviceGroupedConvBwdDataMultipleD` and requires the following types as template parameters:

* **NumDimSpatial** - number of spatial dimensions (1D, 2D, 3D).
* **ALayout** - output layout (NHWGK, GNHWK, NGKHW).
* **BLayout** - weight layout (GKYXC).
* **DsLayout** - layouts of the additional tensors used by the fused operations.
* **ELayout** - input layout (NHWGC, GNHWC, NGCHW).
* **ADataType** - output data type.
* **BDataType** - weight data type.
* **DsDataType** - data types of the additional tensors used by the fused operations.
* **EDataType** - input data type.
* **AElementwiseOperation** - fused operation on tensor A (output).
* **BElementwiseOperation** - fused operation on tensor B (weight).
* **CDEElementwiseOperation** - fused operation on tensor C (input).
* **AComputeType** - compute data type of tensor A for the mfma instruction (ADataType by default).
* **BComputeType** - compute data type of tensor B for the mfma instruction (AComputeType by default).

Grouped convolution backward data supports tensors larger than 2GB (except when the image itself is larger than 2GB).

List of the device operations for grouped convolution backward data in CK:

* **DeviceGroupedConvBwdDataMultipleD_Xdl_CShuffle_v1** - Device operation with XDL instructions and support for fused operations on the input.
* **DeviceGroupedConvBwdDataMultipleD_Wmma_CShuffle** - Device operation with WMMA instructions.

Supported cases in the instance factory with XDL instructions:

|Data type|NHWGC/GKYXC/NHWGK|NGCHW/GKYXC/NGKHW|GNHWC/GKYXC/GNHWK|
|-------|---|---|---|
|bf16|2D, 3D|✗|2D, 3D|
|fp16|2D, 3D|✗|2D, 3D|
|fp32|2D, 3D|✗|2D, 3D|

Supported cases in the instance factory with WMMA instructions:

|Data type|NHWGC/GKYXC/NHWGK|NGCHW/GKYXC/NGKHW|GNHWC/GKYXC/GNHWK|
|-------|---|---|---|
|fp16|2D, 3D|✗|2D, 3D|
|int8|2D, 3D|✗|2D, 3D|

Supported fused elementwise operations in the instance factory:

* **Bilinear** - 3D, NHWGC, bf16/fp16/fp32
* **Scale** - 3D, NHWGC, bf16/fp16/fp32
[Back to supported operations](../../../include/ck/README.md)
# Composable Kernel Grouped Convolution

## Grouped Convolution Backward Weight

Grouped convolution operation for 1D, 2D or 3D spatial dimensions. The convolution utilizes a GEMM kernel after a tensor coordinate transform. The backward weight version uses the splitK feature (due to the large GEMM K dimension). In CK, the Grouped Convolution Backward Weight operation is called `DeviceGroupedConvBwdWeight` and requires the following types as template parameters:
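Why splitK matters here can be seen from the GEMM dimensions of the weight-gradient problem. Under the common implicit-GEMM mapping for 2D backward weight (an assumption for illustration, not taken from the CK source), each group reduces over the batch and all output spatial positions, so the GEMM K dimension is `N * Ho * Wo`:

```python
# Illustrative GEMM dimension bookkeeping for 2D backward weight (per group).
# Assumed mapping: dW[k][y][x][c] = sum over (n, ho, wo) of
#   dOut[n][ho][wo][k] * In[n][ho + y][wo + x][c]   (stride 1, no padding)
def wgrad_gemm_dims(n, k, c, ho, wo, y, x):
    gemm_m = k                # one row per output channel K
    gemm_n = y * x * c        # one column per weight element Y*X*C
    gemm_k = n * ho * wo      # reduction over batch and output positions
    return gemm_m, gemm_n, gemm_k
```

Even a modest problem (batch 256, 56x56 output) yields a reduction dimension in the hundreds of thousands, which is why the K dimension is split between work groups.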

* **NumDimSpatial** - number of spatial dimensions (1D, 2D, 3D).
* **InLayout** - input layout (NHWGC, GNHWC, NGCHW).
* **WeiLayout** - weight layout (GKYXC).
* **OutLayout** - output layout (NHWGK, GNHWK, NGKHW).
* **InDataType** - input data type.
* **WeiDataType** - weight data type.
* **OutDataType** - output data type.
* **InElementwiseOperation** - fused operation on the input tensor.
* **WeiElementwiseOperation** - fused operation on the weight tensor.
* **OutElementwiseOperation** - fused operation on the output tensor.
* **ComputeTypeA** - compute data type of tensor A for the mfma instruction (InDataType by default).
* **ComputeTypeB** - compute data type of tensor B for the mfma instruction (ComputeTypeA by default).

For fused operations with additional tensors there is the `DeviceGroupedConvBwdWeightMultipleD` operation, which requires the following additional parameters:
* **DsLayout** - layouts of the additional tensors used by the fused operations.
* **DsDataType** - data types of the additional tensors used by the fused operations.

Grouped convolution backward weight does not support tensors larger than 2GB.

List of the device operations for grouped convolution backward weight in CK:

* **DeviceGroupedConvBwdWeight_Xdl_CShuffle** - Device operation with XDL instructions.
* **DeviceGroupedConvBwdWeightTwoStage_Xdl_CShuffle** - Device operation with XDL instructions. Optimized for small C or K.
* **DeviceGroupedConvBwdWeight_Wmma_CShuffle** - Device operation with WMMA instructions.
* **DeviceGroupedConvBwdWeightMultipleD_Xdl_CShuffle** - Device operation with XDL instructions and support for fused operations on the output.
* **DeviceGroupedConvBwdWeight_Dl** - Device operation with DL instructions.

Supported cases in the instance factory with XDL instructions:

|Data type|NHWGC/GKYXC/NHWGK|NGCHW/GKYXC/NGKHW|GNHWC/GKYXC/GNHWK|
|-------|---|---|---|
|bf16|2D, 3D|✗|✗|
|bf16 (fp32 for weight)|2D, 3D|✗|1D, 2D, 3D|
|fp16|2D, 3D|✗|1D, 2D, 3D|
|fp32|2D, 3D|✗|1D, 2D, 3D|

Supported cases in the instance factory with WMMA instructions:

|Data type|NHWGC/GKYXC/NHWGK|NGCHW/GKYXC/NGKHW|GNHWC/GKYXC/GNHWK|
|-------|---|---|---|
|fp16|3D|✗|3D|
|int8|3D|✗|3D|

Supported cases in the instance factory with DL instructions:

|Data type|NHWGC/GKYXC/NHWGK|NGCHW/GKYXC/NGKHW|GNHWC/GKYXC/GNHWK|
|-------|---|---|---|
|bf16 (fp32 for weight)|1D, 2D, 3D|✗|1D, 2D, 3D|
|fp16|1D, 2D, 3D|✗|1D, 2D, 3D|
|fp32|1D, 2D, 3D|✗|1D, 2D, 3D|

Supported fused elementwise operations in the instance factory:

* **Bilinear** - 3D, NHWGC, bf16 (fp32 for weight)/fp16/fp32
* **Scale** - 3D, NHWGC, bf16 (fp32 for weight)/fp16/fp32
[Back to the main page](../../README.md)
# Composable Kernel supported operations
## Supported device operations
<!-- * [Average pooling](../../docs/markdown/tensor_operation/average_pooling.md) -->
<!-- * [Batched contraction](../../docs/markdown/tensor_operation/batched_contraction.md) -->
<!-- * [Batched gemm](../../docs/markdown/tensor_operation/batched_gemm.md) -->
<!-- * [Batchnorm](../../docs/markdown/tensor_operation/batchnorm.md) -->
<!-- * [CGEMM](../../docs/markdown/tensor_operation/cgemm.md) -->
<!-- * [Contraction](../../docs/markdown/tensor_operation/contraction.md) -->
<!-- * [Convolution](../../docs/markdown/tensor_operation/convolution.md) -->
<!-- * [Elementwise](../../docs/markdown/tensor_operation/elementwise.md) -->
* [GEMM](../../client_example/01_gemm/README.md)
* [Grouped Convolution Forward](../../client_example/07_grouped_convnd_fwd/README.md)
* [Grouped Convolution Backward Data](../../client_example/10_grouped_convnd_bwd_data/README.md)
* [Grouped Convolution Backward Weight](../../client_example/11_grouped_conv_bwd_weight/README.md)
<!-- * [Grouped GEMM](../../docs/markdown/tensor_operation/grouped_gemm.md) -->
<!-- * [Image to Column and Column to Image](../../docs/markdown/tensor_operation/img2col.md) -->
<!-- * [Max pooling](../../docs/markdown/tensor_operation/max_pooling.md) -->
<!-- * [Reduce](../../docs/markdown/tensor_operation/reduce.md) -->
<!-- * [Normalization](../../docs/markdown/tensor_operation/normalization.md) -->
<!-- * [Permute](../../docs/markdown/tensor_operation/permute.md) -->
<!-- * [Put](../../docs/markdown/tensor_operation/put.md) -->
<!-- * [Softmax](../../docs/markdown/tensor_operation/softmax.md) -->