Commit c5bf348

Bump version to v0.1.1

2 parents: 7c9a592 + 2a06273
File tree: 83 files changed (+4841, -791 lines)

.github/workflows/build.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -198,7 +198,7 @@ jobs:
       - name: Build and install
         run: pip install -e .
       - name: Run unittests
-        run: coverage run --branch --source mmrotate -m pytest tests -sv
+        run: coverage run --branch --source mmrotate -m pytest tests
       - name: Generate coverage report
         run: |
           coverage xml
```
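The only change here is dropping pytest's `-sv` flags from the unit-test step. For reproducing the same measurement locally, here is a minimal sketch (not part of this commit) that uses coverage.py's Python API instead of the CLI; `branch=True` mirrors `--branch` and `source=["mmrotate"]` mirrors `--source mmrotate`.

```python
# Minimal local equivalent of the CI step above (illustrative sketch, not code
# from this commit): measure branch coverage of mmrotate while running pytest,
# then write the XML report that the workflow produces with `coverage xml`.
import coverage
import pytest

cov = coverage.Coverage(branch=True, source=["mmrotate"])
cov.start()
exit_code = pytest.main(["tests"])   # -sv omitted, matching the updated workflow
cov.stop()
cov.save()
cov.xml_report(outfile="coverage.xml")
raise SystemExit(exit_code)
```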

README.md

Lines changed: 22 additions & 10 deletions
```diff
@@ -62,7 +62,18 @@ https://user-images.githubusercontent.com/10410257/154433305-416d129b-60c8-44c7-
 
 </details>
 
+## Changelog
 
+**0.1.1** was released on 14/3/2022:
+
+- Add [colab tutorial](demo/MMRotate_Tutorial.ipynb) for beginners (#66)
+- Support [huge image inference](demo/huge_image_demo.py) (#34)
+- Support HRSC Dataset (#96)
+- Support mixed precision training (#72)
+- Add inference speed statistics [tool](tools/analysis_tools/benchmark.py) (#86)
+- Add confusion matrix analysis [tool](tools/analysis_tools/confusion_matrix.py) (#93)
+
+Please refer to [changelog.md](docs/en/changelog.md) for details and release history.
 
 ## Installation
 
@@ -71,6 +82,7 @@ Please refer to [install.md](docs/en/install.md) for installation guide.
 ## Get Started
 
 Please see [get_started.md](docs/en/get_started.md) for the basic usage of MMRotate.
+We provide [colab tutorial](demo/MMRotate_Tutorial.ipynb) for beginners.
 There are also tutorials:
 
 * [learn the basics](docs/en/intro.md)
@@ -145,21 +157,21 @@ This project is released under the [Apache 2.0 license](LICENSE).
 ## Projects in OpenMMLab
 
 * [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
-* [MIM](https://github.com/open-mmlab/mim): MIM Installs OpenMMLab Packages.
+* [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
 * [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
 * [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
-* [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab next-generation platform for general 3D object detection.
+* [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
+* [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
 * [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
-* [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab next-generation action understanding toolbox and benchmark.
-* [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
+* [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
 * [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
-* [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
-* [MMOCR](https://github.com/open-mmlab/mmocr): A comprehensive toolbox for text detection, recognition and understanding.
-* [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab next-generation toolbox for generative models.
-* [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
-* [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
 * [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
 * [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
 * [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
+* [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
+* [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
+* [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
+* [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
+* [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
+* [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
 * [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
-* [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
```
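Among the changelog items above, mixed precision training (#72) is usually switched on in MMDetection-style codebases with a single `fp16` entry in the config. Below is a hypothetical user-config sketch, not a file from this commit; the base config filename is assumed for illustration.

```python
# Hypothetical config sketch (assumed filename, not part of this commit):
# enabling mixed precision training on top of an existing MMRotate config.
_base_ = ['./rotated_retinanet_obb_r50_fpn_1x_dota_le90.py']  # assumed base config

# 'dynamic' lets the loss scaler adjust itself during training.
fp16 = dict(loss_scale='dynamic')
```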

README_zh-CN.md

Lines changed: 26 additions & 12 deletions
```diff
@@ -59,13 +59,27 @@ https://user-images.githubusercontent.com/10410257/154433305-416d129b-60c8-44c7-
 
 </details>
 
+## Changelog
+
+The latest version, **0.1.1**, was released on 2022.03.14:
+
+- Added a [Colab tutorial](demo/MMRotate_Tutorial.ipynb) for beginners
+- Supported [huge image inference](demo/huge_image_demo.py)
+- Supported the HRSC remote sensing dataset
+- Supported mixed precision training
+- Added an inference speed [benchmark tool](tools/analysis_tools/benchmark.py)
+- Added a confusion matrix [analysis tool](tools/analysis_tools/confusion_matrix.py)
+
+Please read the [changelog](docs/en/changelog.md) for more details and release history.
+
 ## Installation
 
 Please refer to the [installation guide](docs/zh_cn/install.md) for installation.
 
 ## Tutorials
 
 Please refer to [get_started.md](docs/zh_cn/get_started.md) for the basic usage of MMRotate.
+We provide a [Colab tutorial](demo/MMRotate_Tutorial.ipynb) for beginners.
 MMRotate also provides other, more detailed tutorials:
 
 * [Learn the basics](docs/zh_cn/intro.md)
@@ -141,23 +155,23 @@ MMRotate is an open-source project jointly contributed by different universities and companies. We
 
 * [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision
 * [MIM](https://github.com/open-mmlab/mim): MIM is the unified entry point for OpenMMLab projects, algorithms, and models
-* [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark
-* [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark
-* [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab next-generation platform for general 3D object detection
-* [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark
-* [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab next-generation video understanding toolbox and benchmark
-* [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab unified video object perception platform
-* [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark
-* [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox
+* [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox
+* [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab object detection toolbox
+* [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab next-generation platform for general 3D object detection
+* [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark
+* [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox
 * [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab toolbox for end-to-end text detection, recognition, and understanding
-* [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab next-generation toolbox for generative models
-* [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark
-* [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark
+* [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox
 * [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark
 * [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark
 * [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark
+* [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark
+* [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab next-generation video understanding toolbox
+* [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab unified video object perception platform
+* [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark
+* [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox
+* [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox
 * [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework
-* [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark
 
 ## Welcome to the OpenMMLab Community
```
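The "huge image inference" item links to demo/huge_image_demo.py, which splits a very large aerial image into overlapping patches, runs the detector on each patch, and merges the detections. Below is a rough sketch of that flow; the helper name `inference_detector_by_patches` and its parameters are assumed from mmrotate.apis as used by the demo script, and all paths are placeholders.

```python
# Rough sketch of patch-based ("huge image") inference, following the flow of
# demo/huge_image_demo.py. The helper and its parameter names are assumed from
# mmrotate.apis; config, checkpoint, and image paths are placeholders.
from mmdet.apis import init_detector
from mmrotate.apis import inference_detector_by_patches

config = 'configs/oriented_rcnn/oriented_rcnn_r50_fpn_1x_dota_le90.py'  # placeholder
checkpoint = 'oriented_rcnn_r50_fpn_1x_dota_le90.pth'                   # placeholder

model = init_detector(config, checkpoint, device='cuda:0')
result = inference_detector_by_patches(
    model,
    'huge_aerial_image.png',  # placeholder input image
    sizes=[1024],             # patch sizes
    steps=[824],              # patch strides (overlap = size - step)
    ratios=[1.0],             # rescaling ratios applied before patching
    merge_iou_thr=0.1)        # IoU threshold when merging patch-level results
```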

configs/_base_/datasets/hrsc.py

Lines changed: 53 additions & 0 deletions
```diff
@@ -0,0 +1,53 @@
+# dataset settings
+dataset_type = 'HRSCDataset'
+data_root = 'data/hrsc/'
+img_norm_cfg = dict(
+    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
+train_pipeline = [
+    dict(type='LoadImageFromFile'),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='RResize', img_scale=(800, 800)),
+    dict(type='RRandomFlip', flip_ratio=0.5),
+    dict(type='Normalize', **img_norm_cfg),
+    dict(type='Pad', size_divisor=32),
+    dict(type='DefaultFormatBundle'),
+    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile'),
+    dict(
+        type='MultiScaleFlipAug',
+        img_scale=(800, 800),
+        flip=False,
+        transforms=[
+            dict(type='RResize'),
+            dict(type='Normalize', **img_norm_cfg),
+            dict(type='Pad', size_divisor=32),
+            dict(type='DefaultFormatBundle'),
+            dict(type='Collect', keys=['img'])
+        ])
+]
+data = dict(
+    samples_per_gpu=2,
+    workers_per_gpu=2,
+    train=dict(
+        type=dataset_type,
+        classwise=False,
+        ann_file=data_root + 'ImageSets/trainval.txt',
+        ann_subdir=data_root + 'FullDataSet/Annotations/',
+        img_subdir=data_root + 'FullDataSet/AllImages/',
+        pipeline=train_pipeline),
+    val=dict(
+        type=dataset_type,
+        classwise=False,
+        ann_file=data_root + 'ImageSets/trainval.txt',
+        ann_subdir=data_root + 'FullDataSet/Annotations/',
+        img_subdir=data_root + 'FullDataSet/AllImages/',
+        pipeline=test_pipeline),
+    test=dict(
+        type=dataset_type,
+        classwise=False,
+        ann_file=data_root + 'ImageSets/test.txt',
+        ann_subdir=data_root + 'FullDataSet/Annotations/',
+        img_subdir=data_root + 'FullDataSet/AllImages/',
+        pipeline=test_pipeline))
```
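The new HRSC dataset config follows the usual MMCV-style pattern: plain Python assignments that downstream tools read through `mmcv.Config`. A minimal sketch of how such a file is typically consumed (illustrative only, not code from this commit):

```python
# Illustrative sketch: loading and inspecting the HRSC dataset config the way
# MMRotate's tools do, via mmcv's Config loader.
from mmcv import Config

cfg = Config.fromfile('configs/_base_/datasets/hrsc.py')

print(cfg.dataset_type)                         # 'HRSCDataset'
print(cfg.data.train.ann_file)                  # 'data/hrsc/ImageSets/trainval.txt'
print([t['type'] for t in cfg.train_pipeline])  # pipeline step names
```

Full model configs then typically pull this file in through their `_base_` list, e.g. `_base_ = ['../_base_/datasets/hrsc.py', ...]`.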

configs/cfa/README.md

Lines changed: 10 additions & 7 deletions
```diff
@@ -1,22 +1,25 @@
-# [Beyond Bounding-Box: Convex-hull Feature Adaptation for Oriented and Densely Packed Object Detection.](https://openaccess.thecvf.com/content/CVPR2021/papers/Guo_Beyond_Bounding-Box_Convex-Hull_Feature_Adaptation_for_Oriented_and_Densely_Packed_CVPR_2021_paper.pdf)
+# CFA
+> [Beyond Bounding-Box: Convex-hull Feature Adaptation for Oriented and Densely Packed Object Detection.](https://openaccess.thecvf.com/content/CVPR2021/papers/Guo_Beyond_Bounding-Box_Convex-Hull_Feature_Adaptation_for_Oriented_and_Densely_Packed_CVPR_2021_paper.pdf)
 
 <!-- [ALGORITHM] -->
+
 ## Abstract
 
-![illustration](https://raw.githubusercontent.com/zytx121/image-host/main/imgs/cfa.png)
+<div align=center>
+<img src="https://raw.githubusercontent.com/zytx121/image-host/main/imgs/cfa.png" width="800"/>
+</div>
 
 Detecting oriented and densely packed objects remains challenging for spatial feature aliasing caused by the intersection of reception fields between objects. In this paper, we propose a convex-hull feature adaptation (CFA) approach for configuring convolutional features in accordance with oriented and densely packed object layouts. CFA is rooted in convex-hull feature representation, which defines a set of dynamically predicted feature points guided by the convex intersection over union (CIoU) to bound the extent of objects. CFA pursues optimal feature assignment by constructing convex-hull sets and dynamically splitting positive or negative convex-hulls. By simultaneously considering overlapping convex-hulls and objects and penalizing convex-hulls shared by multiple objects, CFA alleviates spatial feature aliasing towards optimal feature adaptation. Experiments on DOTA and SKU110KR datasets show that CFA significantly outperforms the baseline approach, achieving new state-of-the-art detection performance.
 
 ## Results and models
 
-### DOTA1.0
+DOTA1.0
 
-#### RepPoints
 | Backbone | mAP | Angle | lr schd | Mem (GB) | Inf Time (fps) | Aug | Batch Size | Configs | Download |
 |:------------:|:----------:|:-----------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------------:|
-| ResNet50 (1024,1024,200) | 59.44 | oc | 1x | 3.45 | 15.9 | - | 2 | [rotated_reppoints_r50_fpn_1x_dota_oc](../rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc/rotated_reppoints_r50_fpn_1x_dota_oc-d38ce217.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc/rotated_reppoints_r50_fpn_1x_dota_oc_20220205_145010.log.json)
-| ResNet50 (1024,1024,200) | 69.63 | le135 | 1x | 3.45 | 15.7 | - | 2 | [cfa_r50_fpn_1x_dota_le135](./cfa_r50_fpn_1x_dota_le135.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/cfa/cfa_r50_fpn_1x_dota_le135/cfa_r50_fpn_1x_dota_le135-aed1cbc6.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/cfa/cfa_r50_fpn_1x_dota_le135/cfa_r50_fpn_1x_dota_le135_20220205_144859.log.json)
-| ResNet50 (1024,1024,200) | 73.45 | oc | 40e | 3.45 | 15.7 | - | 2 | [cfa_r50_fpn_40e_dota_oc](./cfa_r50_fpn_40e_dota_oc.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/cfa/cfa_r50_fpn_40e_dota_oc/cfa_r50_fpn_40e_dota_oc-2f387232.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/cfa/cfa_r50_fpn_40e_dota_oc/cfa_r50_fpn_40e_dota_oc_20220209_171237.log.json)
+| ResNet50 (1024,1024,200) | 59.44 | oc | 1x | 3.45 | 15.6 | - | 2 | [rotated_reppoints_r50_fpn_1x_dota_oc](../rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc/rotated_reppoints_r50_fpn_1x_dota_oc-d38ce217.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc/rotated_reppoints_r50_fpn_1x_dota_oc_20220205_145010.log.json)
+| ResNet50 (1024,1024,200) | 69.63 | le135 | 1x | 3.45 | 16.1 | - | 2 | [cfa_r50_fpn_1x_dota_le135](./cfa_r50_fpn_1x_dota_le135.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/cfa/cfa_r50_fpn_1x_dota_le135/cfa_r50_fpn_1x_dota_le135-aed1cbc6.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/cfa/cfa_r50_fpn_1x_dota_le135/cfa_r50_fpn_1x_dota_le135_20220205_144859.log.json)
+| ResNet50 (1024,1024,200) | 73.45 | oc | 40e | 3.45 | 16.1 | - | 2 | [cfa_r50_fpn_40e_dota_oc](./cfa_r50_fpn_40e_dota_oc.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/cfa/cfa_r50_fpn_40e_dota_oc/cfa_r50_fpn_40e_dota_oc-2f387232.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/cfa/cfa_r50_fpn_40e_dota_oc/cfa_r50_fpn_40e_dota_oc_20220209_171237.log.json)
 
 
 ## Citation
```
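The "Inf Time (fps)" column in tables like this one comes from the inference speed benchmark added in this release (tools/analysis_tools/benchmark.py). The snippet below is only a rough sketch of the usual measurement pattern (warm-up iterations, synchronized timing), not that script; the MMDetection-style test call `model(return_loss=False, rescale=True, **data)` is assumed.

```python
# Rough sketch (not the actual tools/analysis_tools/benchmark.py): estimate
# inference fps by skipping a few warm-up iterations, then timing synchronized
# forward passes and reporting timed iterations per second.
import time
import torch

def measure_fps(model, data_loader, num_warmup=5, max_iter=200):
    model.eval()
    pure_inf_time, timed = 0.0, 0
    for i, data in enumerate(data_loader):
        torch.cuda.synchronize()
        start = time.perf_counter()
        with torch.no_grad():
            model(return_loss=False, rescale=True, **data)  # MMDetection-style test-mode call
        torch.cuda.synchronize()
        if i >= num_warmup:                  # ignore warm-up iterations
            pure_inf_time += time.perf_counter() - start
            timed += 1
        if timed == max_iter:
            break
    return timed / pure_inf_time
```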

configs/g_reppoints/README.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,4 +1,5 @@
-# G-Rep: Gaussian Representation for Arbitrary-Oriented Object Detection.
+# G-Rep
+> G-Rep: Gaussian Representation for Arbitrary-Oriented Object Detection.
 
 <!-- [ALGORITHM] -->
 ## Abstract
@@ -7,13 +8,12 @@ Core code will release later.
 
 ## Results and models
 
-### DOTA1.0
+DOTA1.0
 
-#### RepPoints
 | Backbone | mAP | Angle | lr schd | Mem (GB) | Inf Time (fps) | Aug | Batch Size | Configs | Download |
 |:------------:|:----------:|:-----------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------------:|
-| ResNet50 (1024,1024,200) | 59.44 | oc | 1x | 3.45 | 15.9 | - | 2 | [rotated_reppoints_r50_fpn_1x_dota_oc](../rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc/rotated_reppoints_r50_fpn_1x_dota_oc-d38ce217.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc/rotated_reppoints_r50_fpn_1x_dota_oc_20220205_145010.log.json)
-| ResNet50 (1024,1024,200) | 69.49 | le135 | 1x | 4.05 | 10.5 | - | 2 | [g_reppoints_r50_fpn_1x_dota_le135](./g_reppoints_r50_fpn_1x_dota_le135.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/g_reppoints/g_reppoints_r50_fpn_1x_dota_le135/g_reppoints_r50_fpn_1x_dota_le135-b840eed7.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/g_reppoints/g_reppoints_r50_fpn_1x_dota_le135/g_reppoints_r50_fpn_1x_dota_le135_20220202_233631.log.json)
+| ResNet50 (1024,1024,200) | 59.44 | oc | 1x | 3.45 | 15.6 | - | 2 | [rotated_reppoints_r50_fpn_1x_dota_oc](../rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc/rotated_reppoints_r50_fpn_1x_dota_oc-d38ce217.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/rotated_reppoints/rotated_reppoints_r50_fpn_1x_dota_oc/rotated_reppoints_r50_fpn_1x_dota_oc_20220205_145010.log.json)
+| ResNet50 (1024,1024,200) | 69.49 | le135 | 1x | 4.05 | 8.6 | - | 2 | [g_reppoints_r50_fpn_1x_dota_le135](./g_reppoints_r50_fpn_1x_dota_le135.py) | [model](https://download.openmmlab.com/mmrotate/v0.1.0/g_reppoints/g_reppoints_r50_fpn_1x_dota_le135/g_reppoints_r50_fpn_1x_dota_le135-b840eed7.pth) &#124; [log](https://download.openmmlab.com/mmrotate/v0.1.0/g_reppoints/g_reppoints_r50_fpn_1x_dota_le135/g_reppoints_r50_fpn_1x_dota_le135_20220202_233631.log.json)
 
 
 ## Citation
```
