Oriented Bounding Boxes
Recently Published Documents


TOTAL DOCUMENTS: 13 (FIVE YEARS: 8)

H-INDEX: 2 (FIVE YEARS: 1)

Author(s): Hao Jiang, Siqi Wang, Huikun Bi, Xiaolei Lv, Binqiang Zhao, ...

Synthesizing indoor scene layouts is a critical and challenging task, especially for digital design and gaming entertainment. Although there has been significant research on layout synthesis for rectangular or L-shaped indoor architecture, little is known about synthesizing plausible layouts for more complicated indoor architecture while fully considering both its geometric and semantic information. In this paper, we propose a novel and effective framework to synthesize plausible indoor layouts for varied and complicated architecture. The given indoor architecture is first encoded into our proposed representation, called InAiR, based on its geometric and semantic information. The indoor objects are then grouped into functional blocks, represented by oriented bounding boxes, and arranged with dynamic convolution networks according to their functionality and human activities. Through comparisons with other approaches, as well as comparative user studies, we find that the indoor scene layouts we generate for diverse, complicated indoor architecture are visually indistinguishable from real ones and reach state-of-the-art performance.
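As a point of reference for how a functional block could be described geometrically, the following is a minimal sketch (not the paper's code) of a 2D oriented bounding box with a centre, extents and a rotation angle; the InAiR encoding and the dynamic convolution networks themselves are beyond this illustration.

```python
# A minimal sketch of representing a functional block by a 2D oriented bounding box.
from dataclasses import dataclass
import math

@dataclass
class OrientedBox:
    cx: float      # centre x
    cy: float      # centre y
    w: float       # width along the local x axis
    h: float       # height along the local y axis
    theta: float   # rotation angle in radians

    def corners(self):
        """Return the four corner points in world coordinates."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        half = [(-self.w / 2, -self.h / 2), ( self.w / 2, -self.h / 2),
                ( self.w / 2,  self.h / 2), (-self.w / 2,  self.h / 2)]
        return [(self.cx + c * x - s * y, self.cy + s * x + c * y) for x, y in half]

# Hypothetical example: a "sofa + coffee table" block, 2.4 m x 1.6 m, rotated 30 degrees.
block = OrientedBox(cx=3.0, cy=2.0, w=2.4, h=1.6, theta=math.radians(30))
print(block.corners())
```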


2020, Vol. 12 (19), pp. 3262
Author(s): Bo Zhong, Kai Ao

Oriented object detection has received extensive attention in recent years, especially for the task of detecting targets in aerial imagery. Traditional detectors locate objects with horizontal bounding boxes (HBBs), which may cause inaccuracies when detecting objects with arbitrary orientations, dense distributions, and large aspect ratios. Oriented bounding boxes (OBBs), which add a rotation angle to the horizontal bounding box, can better handle these problems. However, introducing oriented bounding boxes raises new problems for rotation detectors, such as an increase in the number of anchors and the sensitivity of the intersection over union (IoU) to changes in angle. To overcome these shortcomings while retaining the advantages of oriented bounding boxes, we propose a novel rotation detector that redesigns the matching strategy between oriented anchors and ground-truth boxes. The main idea of the new strategy is to decouple the rotated bounding box into a horizontal bounding box during matching, thereby reducing the instability that the angle introduces into the matching process. Extensive experiments on public remote sensing datasets, including DOTA, HRSC2016 and UCAS-AOD, demonstrate that the proposed approach achieves state-of-the-art detection accuracy with higher efficiency.
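The decoupling idea can be illustrated with a small sketch: each oriented box is replaced by its axis-aligned enclosing box before computing the matching IoU, so small angle changes no longer flip anchor assignments. This is only an illustrative reading of the strategy described above, not the authors' implementation.

```python
# Sketch: match oriented anchors to ground truth via their horizontal enclosing boxes.
import math

def obb_to_hbb(cx, cy, w, h, theta):
    """Axis-aligned box (x1, y1, x2, y2) enclosing an oriented box."""
    c, s = abs(math.cos(theta)), abs(math.sin(theta))
    half_w = (w * c + h * s) / 2
    half_h = (w * s + h * c) / 2
    return cx - half_w, cy - half_h, cx + half_w, cy + half_h

def hbb_iou(a, b):
    """Standard IoU between two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

# The anchor/ground-truth angles are ignored during matching, so a change in
# angle alone does not destabilize the assignment.
anchor = obb_to_hbb(50, 50, 80, 20, math.radians(5))
gt     = obb_to_hbb(52, 49, 80, 20, math.radians(40))
print(hbb_iou(anchor, gt))
```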


Author(s): N. Mo, L. Yan

Abstract. Vehicles in high-resolution remote sensing images are difficult to detect because of their small size and the resulting lack of detailed information. In addition, vehicles belong to multiple fine-grained categories that differ only slightly and are randomly located and oriented, which makes these fine categories hard to locate and identify. Considering these problems, this paper proposes an oriented vehicle detection approach for high-resolution remote sensing images. First of all, we propose an oversampling and stitching method that augments the training dataset by increasing the frequency of objects with fewer training samples, in order to balance the number of objects in each fine-grained vehicle category. Then, considering the effect of pooling operations on the representation of small objects, we propose to increase the resolution of the feature maps so that the detailed information hidden in them is enriched and the fine-grained vehicle categories can be better distinguished. Finally, we design a joint training loss function with center loss for both horizontal and oriented bounding boxes, to decrease the impact of small between-class diversity on vehicle detection. The proposed framework is evaluated on the VEDAI dataset, which consists of 9 fine-grained vehicle categories. The experimental results show that the proposed framework performs better than most competitive approaches, with a mean average precision of 60.7% and 60.4% for detecting horizontal and oriented bounding boxes, respectively.
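The class-balancing effect of the oversampling step can be sketched roughly as below; the `samples` layout (image-id, category pairs) is an assumption for illustration, not the dataset format used in the paper, and the stitching part is omitted.

```python
# Sketch: oversample rarer fine-grained categories so every class contributes
# a comparable number of training objects.
from collections import Counter
import random

def oversample(samples, seed=0):
    rng = random.Random(seed)
    counts = Counter(cat for _, cat in samples)
    target = max(counts.values())                 # bring every class up to the largest one
    balanced = list(samples)
    for cat, n in counts.items():
        pool = [s for s in samples if s[1] == cat]
        balanced += [rng.choice(pool) for _ in range(target - n)]
    rng.shuffle(balanced)
    return balanced

# Hypothetical imbalanced sample list: 50 pickups vs. 5 vans.
samples = [("img1", "pickup")] * 50 + [("img2", "van")] * 5
print(Counter(cat for _, cat in oversample(samples)))
```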


2020, Vol. 12 (16), pp. 2558
Author(s): Nan Mo, Li Yan

Vehicles in aerial images are generally small and unevenly sampled across categories, which leads to poor performance of existing vehicle detection algorithms. Therefore, an oriented vehicle detection framework based on an improved Faster RCNN is proposed for aerial images. First of all, we propose an oversampling and stitching data augmentation method to reduce the negative effect of category imbalance in the training dataset and to construct a new dataset with a balanced number of samples. Then, considering that the pooling operation may weaken the discriminative ability of the features for small objects, we propose to amplify the feature map so that the detailed information hidden in the last feature map is enriched. Finally, we design a joint training loss function, including center loss, for both horizontal and oriented bounding boxes, to reduce the impact of small inter-class diversity on vehicle detection. The proposed framework is evaluated on the VEDAI dataset, which consists of 9 vehicle categories. The experimental results show that the proposed framework outperforms previous approaches, with a mean average precision of 60.4% and 60.1% in detecting horizontal and oriented bounding boxes respectively, which is about 8% better than Faster RCNN.
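A rough sketch of how such a joint loss might be assembled is given below, combining horizontal-box regression, oriented-box regression and a center loss that pulls features of the same class toward a learned class centre. The weights, tensor shapes and PyTorch formulation are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of a joint loss: HBB regression + OBB regression + center loss.
import torch
import torch.nn.functional as F

def center_loss(features, labels, centers):
    """Mean squared distance between each feature and the centre of its class."""
    return ((features - centers[labels]) ** 2).sum(dim=1).mean()

def joint_loss(pred_hbb, gt_hbb, pred_obb, gt_obb, features, labels, centers,
               w_obb=1.0, w_center=0.01):
    l_hbb = F.smooth_l1_loss(pred_hbb, gt_hbb)      # horizontal box regression (4 params)
    l_obb = F.smooth_l1_loss(pred_obb, gt_obb)      # oriented box regression (5 params)
    l_ctr = center_loss(features, labels, centers)  # intra-class compactness
    return l_hbb + w_obb * l_obb + w_center * l_ctr

# Toy example with 4 proposals, 9 vehicle classes and 128-d features.
feats   = torch.randn(4, 128)
labels  = torch.tensor([0, 3, 3, 8])
centers = torch.randn(9, 128)
loss = joint_loss(torch.randn(4, 4), torch.randn(4, 4),
                  torch.randn(4, 5), torch.randn(4, 5),
                  feats, labels, centers)
print(loss.item())
```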


2020, Vol. 12 (9), pp. 1435
Author(s): Chengyuan Li, Bin Luo, Hailong Hong, Xin Su, Yajun Wang, ...

Unlike object detection in natural images, optical remote sensing object detection is a challenging task due to diverse meteorological conditions, complex backgrounds, varied orientations, scale variations, etc. In this paper, to address these issues, we propose a novel object detection network (the global-local saliency constraint network, GLS-Net) that makes full use of global semantic information and yields more accurate oriented bounding boxes. More precisely, to improve the quality of the region proposals and bounding boxes, we first propose a saliency pyramid, which combines a saliency algorithm with a feature pyramid network to reduce the impact of complex backgrounds. Based on the saliency pyramid, we then propose a global attention module branch to strengthen the semantic connection between the target and the global scenario. A fast feature fusion strategy is also used to combine the local object information from the saliency pyramid with the global semantic information optimized by the attention mechanism. Finally, we use an angle-sensitive intersection over union (IoU) method to obtain a more accurate five-parameter representation of the oriented bounding boxes. Experiments with a publicly available object detection dataset for aerial images demonstrate that the proposed GLS-Net achieves state-of-the-art detection performance.
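For reference, the five-parameter oriented box (cx, cy, w, h, θ) and an exact rotated IoU can be sketched as follows; Shapely is used here purely for brevity and is not the angle-sensitive IoU method of the paper.

```python
# Sketch: five-parameter oriented boxes and rotated IoU via polygon intersection.
import math
from shapely.geometry import Polygon

def obb_polygon(cx, cy, w, h, theta):
    """Build the corner polygon of an oriented box (cx, cy, w, h, theta)."""
    c, s = math.cos(theta), math.sin(theta)
    pts = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return Polygon([(cx + c * x - s * y, cy + s * x + c * y) for x, y in pts])

def rotated_iou(box_a, box_b):
    pa, pb = obb_polygon(*box_a), obb_polygon(*box_b)
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter + 1e-9)

# Two boxes that overlap heavily but differ by 15 degrees in angle.
print(rotated_iou((100, 100, 60, 20, math.radians(0)),
                  (100, 100, 60, 20, math.radians(15))))
```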


IEEE Access, 2020, Vol. 8, pp. 5999-6019
Author(s): Yu Zhang, Jingwei Lu, Kui Wang, Jufeng Zhao, Guangmang Cui, ...

Author(s): Hao Cao, Rong Mo, Neng Wan, Qi Deng

A liaison graph is a necessary prerequisite for assembly sequence planning of mechanical products. Traditionally, it is generated via shape matching of the joints among parts, but this strategy fails for truss structures because they lack patterns suitable for shape matching. In this context, this article presents an intelligent method based on a support vector machine to obtain the liaison graphs of truss products automatically. The method defines three kinds of oriented bounding boxes to capture the relationships of the joints in truss structures, and from them deduces a series of factors used as training data for the support vector machine. Furthermore, two algorithms are introduced to calculate the oriented bounding boxes and facilitate the data extraction. Through these steps, the method encodes knowledge about the joints and constructs the liaison graph without shape-matching reasoning. To verify the method, an experimental implementation is presented. The results suggest that, with sufficient training samples, the proposed method can recognize most joint types and construct the liaison graph automatically, with a correct recognition rate above 85%. Compared with a back-propagation neural network, the support vector machine is more accurate and stable in this case. As an alternative method, it can help engineers plan the assembly of truss structures and other similar assemblies.
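A hedged sketch of the learning step is given below: an SVM is trained on numeric factors derived from the oriented bounding boxes of a candidate joint and then predicts the joint type, which becomes a relationship in the liaison graph. The feature vector used here (volume ratio, overlap ratio, axis angle) is a placeholder assumption, not the exact factors defined in the article.

```python
# Sketch: classify joint types from OBB-derived factors with an SVM.
import numpy as np
from sklearn.svm import SVC

# Each row: [obb_volume_ratio, overlap_ratio, axis_angle_deg] for one candidate joint
# (hypothetical factors and values, for illustration only).
X_train = np.array([[0.95, 0.80, 2.0],    # label 0: in-line joint
                    [0.40, 0.10, 88.0],   # label 1: cross joint
                    [0.92, 0.75, 5.0],
                    [0.35, 0.12, 85.0]])
y_train = np.array([0, 1, 0, 1])

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Recognised joint types are then used to build the liaison graph.
print(clf.predict([[0.90, 0.70, 4.0]]))
```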


2015, Vol. 35 (3), pp. 249-258
Author(s): Hao Cao, Rong Mo, Neng Wan, Fang Shang, Chunlei Li, ...

Purpose – The purpose of this paper is to present an automated method for identifying subassemblies of complicated truss structures.
Design/methodology/approach – A community-detection algorithm is introduced and adapted to this end. The ratio between the oriented bounding boxes of parts is used as the edge weight to reflect how compact their assembly relationship is. The authors also propose a method that merges nodes at cut-vertices of the model, which accelerates the solving process.
Findings – The method can identify the subassemblies of complex truss structures according to specific requirements.
Research limitations/implications – The research area is limited to truss structures. The work offers a new method in assembly sequence planning: it can identify subassemblies in complex truss structures, which existing methods are not adequate to handle.
Practical implications – The method can facilitate assembly planning for complex truss structures, reduce human error and shorten planning time.
Social implications – The method could inspire general assembly analysis and planning.
Originality/value – All authors of this paper confirm that this manuscript is original and has not been submitted or published elsewhere.
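A minimal sketch of the subassembly-identification idea, assuming parts as nodes, contacts as edges and the OBB ratio as the edge weight; the community-detection algorithm shown (greedy modularity from NetworkX) and the example part names and weights are illustrative choices, not necessarily the algorithm adapted in the paper.

```python
# Sketch: group tightly coupled truss parts into subassemblies by community detection.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# (part_a, part_b, OBB ratio used as the compactness weight) -- hypothetical values.
G.add_weighted_edges_from([
    ("chord_1", "strut_1", 0.9), ("chord_1", "strut_2", 0.8),
    ("strut_1", "strut_2", 0.7), ("chord_2", "strut_3", 0.9),
    ("chord_2", "strut_4", 0.8), ("chord_1", "chord_2", 0.1),  # weak link between groups
])

subassemblies = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in subassemblies])
```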

