as the question says.
Hi @gfwangjg,
I guess your question is whether models coming from LightGBM can be used as input to the FINN compiler?
We use Brevitas as a frontend, and FINN expects a QONNX model as input. QONNX is based on ONNX but extends it in a specific (non-standard) way, including custom layer types and quantization. So, networks must first be quantized with Brevitas and exported to QONNX before they can be converted into FPGA accelerators. Please find more information on how the FINN flow works here: https://finn.readthedocs.io/en/latest/end_to_end_flow.html
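For illustration, here is a minimal sketch of that first step: defining a small quantized network with Brevitas and exporting it to QONNX for FINN. It assumes a recent Brevitas release that provides `brevitas.export.export_qonnx`; the layer choices, bit widths, and file names are just placeholders, not a recommended topology.

```python
# Minimal sketch: quantize a tiny MLP with Brevitas and export it to QONNX,
# the format the FINN compiler expects as input.
# Assumes a recent Brevitas version exposing brevitas.export.export_qonnx;
# bit widths and layer sizes are illustrative only.
import torch
import torch.nn as nn
from brevitas.nn import QuantIdentity, QuantLinear, QuantReLU
from brevitas.export import export_qonnx


class QuantMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            QuantIdentity(bit_width=8),                              # quantize input activations
            QuantLinear(784, 64, bias=True, weight_bit_width=4),     # 4-bit weights
            QuantReLU(bit_width=4),                                  # 4-bit activations
            QuantLinear(64, 10, bias=True, weight_bit_width=4),
        )

    def forward(self, x):
        return self.net(x)


model = QuantMLP().eval()

# After training, export the quantized model to QONNX; the resulting .onnx
# file is what you would hand to the FINN compiler flow.
export_qonnx(model, torch.randn(1, 784), export_path="quant_mlp_qonnx.onnx")
```

Note that this path assumes a quantized neural network; tree-based models such as LightGBM don't fit this Brevitas-to-QONNX flow.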