Custom Op Clarification #1160
-
Hello, I am having a hard time following the new custom op implementation. Do the steps discussed in this thread still apply? I am trying to create some custom pre-/post-processing operations for letterboxing/NMS, respectively. Thanks
-
The general idea discussed there should still apply and it is mostly the same functions/methods you have to implement, but it is distributed differently throughout the repository and some names changed: Since the last release there is a two-level approach to introducing new custom ops, so you have to implement at least two classes now. The first level must derive from the new `HWCustomOp` class and typically contains all the general shape and datatype handling and maybe calling into the Python or RTL simulation. The second level derives from your first level as well as either the `HLSBackend` or `RTLBackend` class to specialize how your operator is actually implemented. Here you must implement … You might also have to introduce the corresponding … Also remember to register your custom op to the …

I think you have already seen and "liked" my [Squeeze] PR #1153: The actual back-end contribution there is rather trivial, but the "boilerplate" to get the operator working and integrate it with the rest of the framework should look similar for any custom op, so you could take some inspiration from there. It should also show you how to set up test cases for your new operator.

As I am not too familiar with your letterboxing/NMS operator/use-case: Is there a standard ONNX operator, or some composition of standard ONNX operators, covering it, or is this truly custom? If it can be expressed via standard ONNX operations, maybe you could show a graph of the operator pattern you are trying to implement?
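To make the two-level structure more concrete, here is a minimal sketch. Only `HWCustomOp` and `HLSBackend`/`RTLBackend` are the actual class names referred to above; the import paths, the attribute and method names, and the `Letterbox` op itself are placeholder assumptions, not necessarily the real API:

```python
# Hypothetical two-level custom op skeleton; names and paths are assumptions
# for illustration, not taken from the actual code base.
from finn.custom_op.fpgadataflow.hwcustomop import HWCustomOp
from finn.custom_op.fpgadataflow.hlsbackend import HLSBackend


class Letterbox(HWCustomOp):
    """First level: hardware-agnostic shape and datatype handling."""

    def get_nodeattr_types(self):
        attrs = super().get_nodeattr_types()
        # Illustrative node attributes: target output height/width
        attrs.update({"out_h": ("i", True, 0), "out_w": ("i", True, 0)})
        return attrs

    def make_shape_compatible_op(self, model):
        # Return a standard ONNX node producing the same output shape, so
        # shape inference can handle the abstract custom op
        raise NotImplementedError

    def infer_node_datatype(self, model):
        # Propagate the FINN datatype from the input to the output tensor
        raise NotImplementedError

    def execute_node(self, context, graph):
        # Pure-Python functional model of the op for simulation
        raise NotImplementedError


class Letterbox_hls(Letterbox, HLSBackend):
    """Second level: specializes the abstract op to an HLS implementation."""

    def get_nodeattr_types(self):
        attrs = Letterbox.get_nodeattr_types(self)
        attrs.update(HLSBackend.get_nodeattr_types(self))
        return attrs

    # The HLS code-generation hooks (e.g. docompute, blackboxfunction,
    # pragmas) would be implemented here.
```

If I remember correctly, registration then happens via the `custom_op` dictionary of the corresponding module, with one entry for the abstract op and one for the backend specialization, but best double-check that against the PR mentioned above.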
-
Hi @iksnagreb, Thanks for your detailed response. I will take another look at your PR, thanks for the reminder! I am currently implementing a YOLO model, where the letterboxing resizes and pads an image to preserve its aspect ratio and NMS selects the best of the overlapping bounding boxes. I am trying to implement something along these lines: reshape and nms (from docs) nodes. I have letterboxing (HLS) and NMS (on the PS) working during inference on PYNQ, but I am looking to streamline the "build" process instead of manually stitching these together per build. Thanks again for your help!
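Roughly, the pattern I have in mind looks like this as standard ONNX. This is just a minimal sketch with made-up shapes and thresholds, and the YOLO head decode that would actually produce `boxes`/`scores` is omitted:

```python
import onnx
from onnx import TensorProto, helper

# Letterboxing a 480x640 image down to a 320x320 model input: Resize keeps
# the 4:3 aspect ratio (-> 240x320), Pad fills the remaining 40 rows at the
# top and bottom with a constant fill value.
resize = helper.make_node(
    "Resize", ["image", "", "", "sizes"], ["resized"], mode="linear"
)
pad = helper.make_node(
    "Pad", ["resized", "pads", "pad_value"], ["letterboxed"], mode="constant"
)
# Standard ONNX NMS; "boxes"/"scores" stand in for the (model-specific)
# decoded detection head outputs, which are not shown here.
nms = helper.make_node(
    "NonMaxSuppression",
    ["boxes", "scores", "max_boxes", "iou_thresh", "score_thresh"],
    ["selected_indices"],
)

graph = helper.make_graph(
    [resize, pad, nms],
    "letterbox_nms_sketch",
    inputs=[
        helper.make_tensor_value_info("image", TensorProto.FLOAT, [1, 3, 480, 640]),
        helper.make_tensor_value_info("boxes", TensorProto.FLOAT, [1, 100, 4]),
        helper.make_tensor_value_info("scores", TensorProto.FLOAT, [1, 80, 100]),
    ],
    outputs=[
        helper.make_tensor_value_info("letterboxed", TensorProto.FLOAT, [1, 3, 320, 320]),
        helper.make_tensor_value_info("selected_indices", TensorProto.INT64, ["num_selected", 3]),
    ],
    initializer=[
        helper.make_tensor("sizes", TensorProto.INT64, [4], [1, 3, 240, 320]),
        helper.make_tensor("pads", TensorProto.INT64, [8], [0, 0, 40, 0, 0, 0, 40, 0]),
        helper.make_tensor("pad_value", TensorProto.FLOAT, [], [114.0]),
        helper.make_tensor("max_boxes", TensorProto.INT64, [], [100]),
        helper.make_tensor("iou_thresh", TensorProto.FLOAT, [], [0.45]),
        helper.make_tensor("score_thresh", TensorProto.FLOAT, [], [0.25]),
    ],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)
```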