# YOLOX models for ExecuTorch
YOLOX models trained on COCO object detection (118k annotated images) at resolution 640x640. YOLOX was introduced in the paper [YOLOX: Exceeding YOLO Series in 2021](https://arxiv.org/abs/2107.08430) by Zheng Ge et al. and first released in this repository.
The models in this repo have been exported for use with the ExecuTorch runtime.

Here is an example of detections created with YOLOX-Nano and the ExecuTorch runtime:

![Example detections with YOLOX-Nano][ImageTag]
The models are exported from the following standard models trained on COCO:

### Standard Models
| Model | size | mAP<sup>val</sup><br>0.5:0.95 | mAP<sup>test</sup><br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) | FLOPs<br>(G) | weights |
|---|---|---|---|---|---|---|---|
| YOLOX-s | 640 | 40.5 | 40.5 | 9.8 | 9.0 | 26.8 | github |
| YOLOX-m | 640 | 46.9 | 47.2 | 12.3 | 25.3 | 73.8 | github |
| YOLOX-l | 640 | 49.7 | 50.1 | 14.5 | 54.2 | 155.6 | github |
| YOLOX-x | 640 | 51.1 | 51.5 | 17.3 | 99.1 | 281.9 | github |
| YOLOX-Darknet53 | 640 | 47.7 | 48.0 | 11.1 | 63.7 | 185.3 | github |
## How to use
The models have been exported using the code from this PR, which includes instructions on how to export your own model so it can be executed using the ExecuTorch runtime.
Example code showing how to run inference:

```python
import cv2
import numpy as np
import torch
from executorch.runtime import Runtime

input_shape = (640, 640)  # (416, 416) for tiny and nano

# Load the image and resize it to the model's expected input size
origin_img = cv2.imread("path/to/your/image.png")
img = cv2.resize(origin_img, input_shape)

# Convert HWC uint8 to the NCHW float32 layout the model expects
img = img.transpose(2, 0, 1).astype(np.float32)

runtime = Runtime.get()
method = runtime.load_program("path/to/model/yolox_s.pte").load_method("forward")
output = method.execute([torch.from_numpy(img).unsqueeze(0)])
output = [o.numpy() for o in output]
# Add postprocessing such as NMS to turn the raw predictions into bounding boxes
```
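The raw outputs are per-anchor predictions, not final boxes. The sketch below shows one way to post-process them: the grid decoding mirrors the `demo_postprocess` helper from the upstream YOLOX repo, and NMS is done with OpenCV's `cv2.dnn.NMSBoxes`. It assumes the exported model returns un-decoded predictions of shape `(1, N, 85)` (80 COCO classes), as the upstream export scripts configure via `decode_in_inference = False`; if your export bakes the decode in, skip `decode_outputs`.

```python
import cv2
import numpy as np

def decode_outputs(preds, img_size=(640, 640)):
    # Map raw YOLOX predictions (1, N, 5 + num_classes) from grid units to
    # pixels; mirrors demo_postprocess() in the upstream YOLOX repo.
    grids, strides = [], []
    for stride in (8, 16, 32):
        hsize, wsize = img_size[0] // stride, img_size[1] // stride
        xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
        grid = np.stack((xv, yv), 2).reshape(1, -1, 2)
        grids.append(grid)
        strides.append(np.full((1, grid.shape[1], 1), stride))
    grids = np.concatenate(grids, 1).astype(np.float32)
    strides = np.concatenate(strides, 1).astype(np.float32)
    preds = preds.astype(np.float32).copy()
    preds[..., :2] = (preds[..., :2] + grids) * strides   # box centers (cx, cy)
    preds[..., 2:4] = np.exp(preds[..., 2:4]) * strides   # box sizes (w, h)
    return preds

preds = decode_outputs(output[0])[0]      # (N, 85)
boxes = preds[:, :4]                      # (cx, cy, w, h) in pixels
scores = preds[:, 4:5] * preds[:, 5:]     # objectness * class confidence
cls_ids = scores.argmax(1)
cls_scores = scores[np.arange(len(cls_ids)), cls_ids]

# cv2.dnn.NMSBoxes expects (x, y, w, h) boxes with a top-left origin
xywh = boxes.copy()
xywh[:, :2] -= xywh[:, 2:] / 2
keep = cv2.dnn.NMSBoxes(xywh.tolist(), cls_scores.tolist(), 0.3, 0.45)
for i in np.array(keep).flatten():
    print(f"class {cls_ids[i]}, score {cls_scores[i]:.2f}, box {xywh[i]}")
```

Note that the boxes are in the coordinate frame of the resized input; scale them back by the resize ratio before drawing on the original image.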
## How to export and use your own YOLOX model
Install the YOLOX project from here and follow these instructions:
### Step 1: Install ExecuTorch

Run the following command to install ExecuTorch:

```
pip install executorch
```
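To sanity-check the installation (an optional check, not part of the upstream instructions), you can try loading the runtime used later in this guide:

```
python -c "from executorch.runtime import Runtime; Runtime.get(); print('ExecuTorch OK')"
```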
### Step 2: Convert Your Model to ExecuTorch

First, move to the YOLOX home directory:

```
cd <YOLOX_HOME>
```

Then, you can:
- Convert a standard YOLOX model by `-n`:

  ```
  python3 tools/export_executorch.py --output-name yolox_s.pte -n yolox-s -c yolox_s.pth
  ```

  Notes:
  - `-n`: specify a model name. The model name must be one of [yolox-s, yolox-m, yolox-l, yolox-x, yolox-nano, yolox-tiny, yolov3]
  - `-c`: the checkpoint of the model you have trained

  To customize the input shape of the exported model, modify the following line in `tools/export_executorch.py` (see the example after this list):

  ```
  dummy_input = torch.randn(1, 3, exp.test_size[0], exp.test_size[1])
  ```
- Convert a standard YOLOX model by `-f`. When using `-f`, the above command is equivalent to:

  ```
  python3 tools/export_executorch.py --output-name yolox_s.pte -f exps/default/yolox_s.py -c yolox_s.pth
  ```
- To convert your customized model, please use `-f`:

  ```
  python3 tools/export_executorch.py --output-name your_yolox.pte -f exps/your_dir/your_yolox.py -c your_yolox.pth
  ```
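For example, to export at the 416x416 resolution used by the tiny and nano variants, the `dummy_input` line above could be hard-coded as follows (a sketch; the `--input_shape` passed to the demo below must then match):

```python
# Export at 416x416 instead of the experiment's default test size
dummy_input = torch.randn(1, 3, 416, 416)
```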
### Step 3: ExecuTorch Runtime Demo

Step 1.

```
cd <YOLOX_HOME>/demo/executorch
```

Step 2.

```
python3 executorch_inference.py -m <EXECUTORCH_MODEL_PATH> -i <IMAGE_PATH> -o <OUTPUT_DIR> -s 0.3 --input_shape 640,640
```
Notes:
- `-m`: your converted `.pte` model
- `-i`: input image
- `-s`: score threshold for visualization
- `--input_shape`: should be consistent with the shape you used for the ExecuTorch conversion
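For example, with the `yolox_s.pte` exported above (the image and output paths here are illustrative):

```
python3 executorch_inference.py -m yolox_s.pte -i dog.jpg -o demo_output -s 0.3 --input_shape 640,640
```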
## Cite YOLOX
If you use YOLOX in your research, please cite our work by using the following BibTeX entry:
```
@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}
```
[ImageTag]: ./example_output.png
