 [🛠️Installation](https://mmrotate.readthedocs.io/en/latest/install.html) |
 [👀Model Zoo](docs/en/model_zoo.md) |
 [🤔Reporting Issues](https://github.com/open-mmlab/mmrotate/issues/new/choose)
+
 </div>

 ## Introduction
@@ -43,31 +44,29 @@ The master branch works with **PyTorch 1.6+**.

 https://user-images.githubusercontent.com/10410257/154433305-416d129b-60c8-44c7-9ebb-5ba106d3e9d5.MP4

-
 <details open>
 <summary><b>Major Features</b></summary>

-* **Support multiple angle representations**
+- **Support multiple angle representations**

   MMRotate provides three mainstream angle representations to meet different paper settings.

-* **Modular Design**
+- **Modular Design**

   We decompose the rotated object detection framework into different components,
   which makes it much easier and more flexible to build a new model by combining different modules.

-* **Strong baseline and State of the art**
+- **Strong baseline and State of the art**

   The toolbox provides strong baselines and state-of-the-art methods in rotated object detection.

 </details>

 ## Changelog

-**0.3.0** was released in 29/4/2022:
+**0.3.1** was released in 6/6/2022:

-- Support TorchServe (#160)
-- Support Rotated ATSS (CVPR'20) (#179)
+- Support Rotated FCOS (#223)

 Please refer to [changelog.md](docs/en/changelog.md) for details and release history.

@@ -81,12 +80,11 @@ Please see [get_started.md](docs/en/get_started.md) for the basic usage of MMRot
 We provide a [Colab tutorial](demo/MMRotate_Tutorial.ipynb) for beginners.
 There are also tutorials:

-* [learn the basics](docs/en/intro.md)
-* [learn the config](docs/en/tutorials/customize_config.md)
-* [customize dataset](docs/en/tutorials/customize_dataset.md)
-* [customize model](docs/en/tutorials/customize_models.md)
-* [useful tools](docs/en/tutorials/useful_toos.md)
-
+- [learn the basics](docs/en/intro.md)
+- [learn the config](docs/en/tutorials/customize_config.md)
+- [customize dataset](docs/en/tutorials/customize_dataset.md)
+- [customize model](docs/en/tutorials/customize_models.md)
+- [useful tools](docs/en/tutorials/useful_tools.md)

 ## Model Zoo

@@ -96,23 +94,24 @@ A summary can be found in the [Model Zoo](docs/en/model_zoo.md) page.
 <details open>
 <summary><b>Supported algorithms:</b></summary>

-* [x] [Rotated RetinaNet-OBB/HBB](configs/rotated_retinanet/README.md) (ICCV'2017)
-* [x] [Rotated FasterRCNN-OBB](configs/rotated_faster_rcnn/README.md) (TPAMI'2017)
-* [x] [Rotated RepPoints-OBB](configs/rotated_reppoints/README.md) (ICCV'2019)
-* [x] [RoI Transformer](configs/roi_trans/README.md) (CVPR'2019)
-* [x] [Gliding Vertex](configs/gliding_vertex/README.md) (TPAMI'2020)
-* [x] [Rotated ATSS-OBB](configs/rotated_atss/README.md) (CVPR'2020)
-* [x] [CSL](configs/csl/README.md) (ECCV'2020)
-* [x] [R<sup>3</sup>Det](configs/r3det/README.md) (AAAI'2021)
-* [x] [S<sup>2</sup>A-Net](configs/s2anet/README.md) (TGRS'2021)
-* [x] [ReDet](configs/redet/README.md) (CVPR'2021)
-* [x] [Beyond Bounding-Box](configs/cfa/README.md) (CVPR'2021)
-* [x] [Oriented R-CNN](configs/oriented_rcnn/README.md) (ICCV'2021)
-* [x] [GWD](configs/gwd/README.md) (ICML'2021)
-* [x] [KLD](configs/kld/README.md) (NeurIPS'2021)
-* [x] [SASM](configs/sasm_reppoints/README.md) (AAAI'2022)
-* [x] [KFIoU](configs/kfiou/README.md) (arXiv)
-* [x] [G-Rep](configs/g_reppoints/README.md) (stay tuned)
+- [x] [Rotated RetinaNet-OBB/HBB](configs/rotated_retinanet/README.md) (ICCV'2017)
+- [x] [Rotated FasterRCNN-OBB](configs/rotated_faster_rcnn/README.md) (TPAMI'2017)
+- [x] [Rotated RepPoints-OBB](configs/rotated_reppoints/README.md) (ICCV'2019)
+- [x] [Rotated FCOS](configs/rotated_fcos/README.md) (ICCV'2019)
+- [x] [RoI Transformer](configs/roi_trans/README.md) (CVPR'2019)
+- [x] [Gliding Vertex](configs/gliding_vertex/README.md) (TPAMI'2020)
+- [x] [Rotated ATSS-OBB](configs/rotated_atss/README.md) (CVPR'2020)
+- [x] [CSL](configs/csl/README.md) (ECCV'2020)
+- [x] [R<sup>3</sup>Det](configs/r3det/README.md) (AAAI'2021)
+- [x] [S<sup>2</sup>A-Net](configs/s2anet/README.md) (TGRS'2021)
+- [x] [ReDet](configs/redet/README.md) (CVPR'2021)
+- [x] [Beyond Bounding-Box](configs/cfa/README.md) (CVPR'2021)
+- [x] [Oriented R-CNN](configs/oriented_rcnn/README.md) (ICCV'2021)
+- [x] [GWD](configs/gwd/README.md) (ICML'2021)
+- [x] [KLD](configs/kld/README.md) (NeurIPS'2021)
+- [x] [SASM](configs/sasm_reppoints/README.md) (AAAI'2022)
+- [x] [KFIoU](configs/kfiou/README.md) (arXiv)
+- [x] [G-Rep](configs/g_reppoints/README.md) (stay tuned)

 </details>

@@ -155,22 +154,22 @@ This project is released under the [Apache 2.0 license](LICENSE).

 ## Projects in OpenMMLab

-* [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
-* [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
-* [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
-* [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
-* [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
-* [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
-* [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
-* [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
-* [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
-* [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
-* [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
-* [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
-* [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
-* [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
-* [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
-* [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
-* [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
-* [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
-* [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
+- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
+- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
+- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
+- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
+- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
+- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
+- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
+- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
+- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
+- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
+- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
+- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
+- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
+- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
+- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
+- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
+- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.