v0.6.0
enhancement
- add coco_evaluation script, refactor coco_error_analysis script (#162)
`coco_evaluation.py` script usage: running `python scripts/coco_evaluation.py dataset.json results.json` will calculate COCO evaluation results and export them to the given output directory.

- If you want to specify the mAP metric type, set it as `--metric bbox mask`.
- If you also want to calculate classwise scores, add the `--classwise` argument.
- If you want to specify max detections, set it as `--proposal_nums 10 100 500`.
- If you want to specify a specific IoU threshold, set it as `--iou_thrs 0.5`. By default, scores are reported for `0.50:0.95` and `0.5`.
- If you want to specify the export directory, set it as `--out_dir output/folder/directory`.
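For example, the options above can be combined in a single call; this is a sketch using only the flags listed here, with the placeholder paths from the examples above:

```bash
# Evaluate bbox and mask mAP with classwise scores, custom max detections,
# a 0.5 IoU threshold, and a custom export directory.
python scripts/coco_evaluation.py dataset.json results.json \
    --metric bbox mask \
    --classwise \
    --proposal_nums 10 100 500 \
    --iou_thrs 0.5 \
    --out_dir output/folder/directory
```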
`coco_error_analysis.py` script usage: running `python scripts/coco_error_analysis.py dataset.json results.json` will calculate COCO error plots and export them to the given output directory.

- If you want to specify the mAP result type, set it as `--types bbox mask`.
- If you want to export extra mAP bar plots and annotation area stats, add the `--extraplots` argument.
- If you want to specify area regions, set it as `--areas 1024 9216 10000000000`.
- If you want to specify the export directory, set it as `--out_dir output/folder/directory`.
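Similarly, a sketch combining the `coco_error_analysis.py` options above, again with the placeholder paths from the examples:

```bash
# Export error plots for bbox and mask results, plus extra mAP bar plots and
# annotation area stats, using custom area regions and export directory.
python scripts/coco_error_analysis.py dataset.json results.json \
    --types bbox mask \
    --extraplots \
    --areas 1024 9216 10000000000 \
    --out_dir output/folder/directory
```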
bugfixes
breaking changes
- refactor predict (#161)

  By default, scripts apply both standard and sliced prediction (multi-stage inference). If you don't want to perform sliced prediction, add the `--no_sliced_pred` argument. If you don't want to perform standard prediction, add the `--no_standard_pred` argument.
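As a minimal sketch: only the `--no_sliced_pred` and `--no_standard_pred` flags come from these notes; the script path and `--source` argument below are illustrative assumptions, not confirmed by this changelog.

```bash
# Default behavior: both standard and sliced prediction (multi-stage inference).
# NOTE: script path and --source are assumed for illustration only.
python scripts/predict.py --source path/to/images

# Standard prediction only (skip sliced prediction):
python scripts/predict.py --source path/to/images --no_sliced_pred

# Sliced prediction only (skip standard prediction):
python scripts/predict.py --source path/to/images --no_standard_pred
```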