Field Network—A New Method to Detect Directional Object

Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4262
Author(s):  
Jin Liu ◽  
Yongjian Gao

With the development of object detection technology in computer vision, identifying objects remains an active yet challenging task, and ever stricter requirements for efficiency and accuracy are being imposed on state-of-the-art algorithms. However, many algorithms perform object box regression based on an RPN (Region Proposal Network) and anchors, which cannot accurately describe the shape information of the object. In this paper, we propose a new object detection method called Field Network (FN), together with a Region Fitting Algorithm (RFA), which solves these problems through a Center Field. The center field reflects the probability of a pixel approaching the object center. Unlike previous methods, we abandon anchors and ROI technologies and propose the concept of a Field: the intensity of the object area, reflecting the probability that an object is present there. Based on the probability density distribution of the object center in the visual field perception area, we add an Object Field to the output part, abstract it into an Elliptic Field with a normal distribution, and use RFA to fit objects. Additionally, we add two fields to predict the x and y components of the object direction, which contain the neural units in the field array. We extract the objects through these Fields. Moreover, our model is relatively simple and compact, with a size of only 73 MB. Our method improves performance considerably over baseline systems on the DOTA, MS COCO, and PASCAL VOC datasets, with overall performance competitive with recent state-of-the-art systems.
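The elliptic Center Field described above can be sketched as a 2D Gaussian over the image grid. The function name and the axis-aligned form below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def center_field(h, w, cx, cy, sx, sy):
    """Elliptic 'center field': each pixel gets the intensity of an
    axis-aligned 2D Gaussian centred on (cx, cy), so values near 1
    mark pixels close to the object center."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-(((xs - cx) ** 2) / (2 * sx ** 2)
                    + ((ys - cy) ** 2) / (2 * sy ** 2)))

# Peak sits at the object center; intensity decays elliptically.
field = center_field(8, 8, cx=4, cy=4, sx=2.0, sy=1.0)
```

A region-fitting step would then fit an ellipse to the high-intensity area of such a field to recover the object's extent.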

Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 567
Author(s):  
Donghun Yang ◽  
Kien Mai Ngoc ◽  
Iksoo Shin ◽  
Kyong-Ha Lee ◽  
Myunggwon Hwang

To design an efficient deep learning model that can be used in the real world, it is important to detect out-of-distribution (OOD) data well. Various studies have been conducted to solve the OOD problem. The current state-of-the-art approach uses a confidence score based on the Mahalanobis distance in a feature space. Although it outperformed previous approaches, its results were sensitive to the quality of the trained model and the complexity of the dataset. Herein, we propose a novel OOD detection method that trains a feature space more suited to OOD detection. The proposed method uses an ensemble of features trained with a softmax-based classifier and a network based on distance metric learning (DML). Through the complementary interaction of these two networks, the trained feature space has a more tightly clustered distribution and fits a per-class Gaussian distribution well. Therefore, OOD data can be efficiently detected by setting a threshold in the trained feature space. To evaluate the proposed method, we applied it to various combinations of image datasets. The results show that the overall performance of the proposed approach is superior to that of other methods, including the state-of-the-art approach, on any combination of datasets.
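The thresholding step can be sketched as follows: a sample is flagged OOD when even its nearest class centroid, measured by Mahalanobis distance in the trained feature space, lies beyond a chosen threshold. Function names and the shared inverse covariance are assumptions for illustration:

```python
import numpy as np

def min_mahalanobis(x, class_means, cov_inv):
    """Smallest class-conditional squared Mahalanobis distance;
    its negative serves as the confidence score."""
    return min(float((x - m) @ cov_inv @ (x - m)) for m in class_means)

def is_ood(x, class_means, cov_inv, threshold):
    # OOD when the sample is far from every class centroid.
    return min_mahalanobis(x, class_means, cov_inv) > threshold
```

With tight per-class Gaussians, in-distribution samples fall close to some centroid, so a single distance threshold separates them from OOD data.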


2021 ◽  
Vol 11 (9) ◽  
pp. 3782
Author(s):  
Chu-Hui Lee ◽  
Chen-Wei Lin

Object detection is one of the important technologies in the field of computer vision. In the area of fashion apparel, object detection technology has various applications, such as apparel recognition, apparel detection, fashion recommendation, and online search. The recognition task is difficult for a computer because fashion apparel images vary widely in clothing appearance and material. Currently, fast and accurate object detection is the most important goal in this field. In this study, we proposed a two-phase fashion apparel detection method named YOLOv4-TPD (YOLOv4 Two-Phase Detection), based on the YOLOv4 algorithm, to address this challenge. The target categories for model detection were jacket, top, pants, skirt, and bag. According to the definition of inductive transfer learning, the purpose is to transfer knowledge from the source domain to the target domain so as to improve performance on tasks in the target domain. Therefore, we used a two-phase training method to implement the transfer learning. Finally, the experimental results showed that, through two-phase transfer learning, the mAP of our model was better than that of the original YOLOv4 model. The proposed model has multiple potential applications, such as automatic labeling systems, style retrieval, and similarity detection.
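A minimal sketch of the two-phase idea, assuming a model split into a source-pretrained backbone and a newly initialized detection head (the dict layout and flag names are illustrative, not YOLOv4-TPD's actual API):

```python
# Hypothetical parameter groups standing in for the pretrained
# YOLOv4 backbone and the new fashion-apparel detection head.
model = {"backbone": {"trainable": True}, "head": {"trainable": True}}

def set_phase(model, phase):
    """Phase 1: freeze the source-domain backbone and train only the
    head on target-domain data; phase 2: unfreeze everything and
    fine-tune the whole network end to end."""
    model["backbone"]["trainable"] = (phase == 2)
    model["head"]["trainable"] = True
```

Freezing first lets the head adapt to the apparel categories without disturbing the transferred features; the second phase then refines both jointly.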


2019 ◽  
Vol 2019 ◽  
pp. 1-16
Author(s):  
Jiangfan Feng ◽  
Fanjie Wang ◽  
Siqin Feng ◽  
Yongrong Peng

The performance of convolutional neural network (CNN)-based object detection has achieved incredible success. However, existing CNN-based algorithms struggle to detect small-scale objects, whose responses may be lost by the time the feature map reaches a certain depth, and it is common for the scale of objects (such as cars, buses, and pedestrians) in traffic images and videos to vary greatly. In this paper, we present a 32-layer multibranch convolutional neural network named MBNet for fast object detection in traffic scenes. Our model uses three detection branches, with feature maps of size 16 × 16, 32 × 32, and 64 × 64, respectively, to optimize detection for large-, medium-, and small-scale objects. By means of a multitask loss function, our model can be trained end to end. The experimental results show that our model achieves state-of-the-art performance in terms of precision and recall, and its detection speed (up to 33 fps) is fast enough to meet real-time industrial requirements.
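One way to read the three-branch design: each ground-truth box is routed to the 16 × 16, 32 × 32, or 64 × 64 feature map according to object scale, so small objects are matched on the finest map before their response fades. The relative-area thresholds below are illustrative assumptions, not values from MBNet:

```python
def assign_branch(box_w, box_h, img_size=512):
    """Route a ground-truth box to one of three detection branches by
    its area relative to the image (thresholds are illustrative)."""
    area = (box_w * box_h) / (img_size * img_size)
    if area > 0.25:
        return 16   # large objects -> coarsest 16x16 map
    if area > 0.05:
        return 32   # medium objects -> 32x32 map
    return 64       # small objects -> finest 64x64 map
```

A multitask loss then sums the per-branch classification and regression terms so all three branches train jointly, end to end.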


2018 ◽  
Vol 232 ◽  
pp. 04036
Author(s):  
Jun Yin ◽  
Huadong Pan ◽  
Hui Su ◽  
Zhonggeng Liu ◽  
Zhirong Peng

We propose an object detection method that predicts oriented bounding boxes (OBB) to estimate object locations, scales, and orientations, based on YOLO (You Only Look Once), one of the top detection algorithms, performing well in both accuracy and speed. Existing object detection methods use horizontal bounding boxes (HBB), which are not robust to orientation variance, to detect targets. The proposed orientation-invariant YOLO (OIYOLO) detector can effectively handle bird's-eye-view images in which the orientation angles of objects are arbitrary. To estimate the rotated angle of objects, we design a new angle loss function; training OIYOLO therefore forces the network to learn the annotated orientation angle of objects, making OIYOLO orientation invariant. The proposed OBB prediction approach can also be applied in other detection frameworks. In addition, to evaluate the proposed OIYOLO detector, we create a UAV-DAHUA dataset accurately annotated with object locations, scales, and orientation angles. Extensive experiments conducted on the UAV-DAHUA and DOTA datasets demonstrate that OIYOLO achieves state-of-the-art detection performance with high efficiency compared with the baseline YOLO algorithms.
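An angle loss must respect the periodicity of orientation, so that an error of 2π counts as zero. A common choice, given here only as a hedged stand-in for the paper's actual formulation, is a cosine-based penalty:

```python
import math

def angle_loss(pred, target):
    """Periodic angle penalty: zero when the predicted and annotated
    orientations coincide modulo 2*pi, maximal when they are opposed.
    Illustrative only; not OIYOLO's published loss."""
    return 1.0 - math.cos(pred - target)
```

Because the penalty is smooth and periodic, gradients never push the network across an artificial discontinuity at the angle wrap-around, which a plain L1 difference on raw angles would suffer from.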


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2075
Author(s):  
Hao Chen ◽  
Hong Zheng

Anchor-based detectors are widely adopted in object detection. To improve accuracy, multiple anchor boxes are densely placed on the input image, yet most of them are invalid. Although anchor-free methods can reduce the number of useless anchor boxes, invalid ones still account for a high proportion. On this basis, this paper proposes an object-detection method based on center point proposals that reduces the number of useless anchor boxes while improving their quality, balancing the proportion of positive and negative samples. By introducing a differentiation module in the shallow layers, the new method can alleviate missed detections caused by overlapping center points. When trained and tested on the COCO (Common Objects in Context) dataset, the algorithm records an increase of about 2% in APS (Average Precision for Small Objects), reaching 27.8%. The detector designed in this study outperforms most state-of-the-art real-time detectors in the speed-accuracy trade-off, achieving an AP of 43.2 at 137 ms.
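A toy sketch of the center-point-proposal idea: candidate boxes survive only if their center lies near a proposed object center, pruning invalid anchors before classification and rebalancing positives against negatives. The function and tolerance are illustrative assumptions, not the paper's algorithm:

```python
def filter_by_centers(boxes, center_points, tol=2.0):
    """Keep only boxes (x1, y1, x2, y2) whose center falls within
    `tol` pixels of a proposed object center point."""
    kept = []
    for (x1, y1, x2, y2) in boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if any(abs(cx - px) <= tol and abs(cy - py) <= tol
               for (px, py) in center_points):
            kept.append((x1, y1, x2, y2))
    return kept
```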


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1933
Author(s):  
Rixia Qin ◽  
Xiaohong Zhao ◽  
Wenbo Zhu ◽  
Qianqian Yang ◽  
Bo He ◽  
...  

Underwater fishing nets represent a danger faced by autonomous underwater vehicles (AUVs). To avoid irreparable damage caused by fishing nets, an AUV must be able to identify and locate them autonomously and avoid them in advance. Whether the AUV can avoid fishing nets successfully depends on the accuracy and efficiency of detection. In this paper, we propose a multiple receptive field object detection network (MRF-Net), which recognizes and locates fishing nets in forward-looking sonar (FLS) images. The proposed architecture is a center-point-based detector that uses a novel encoder-decoder structure to extract features and predict the center points and bounding box sizes. In addition, to reduce the interference of reverberation and speckle noise in FLS images, we applied a series of preprocessing operations. We trained and tested the network on data collected at sea with a Gemini 720i multi-beam forward-looking sonar and compared it with state-of-the-art object detection networks. To further prove that our detector can be applied to real detection tasks, we also carried out a real-time fishing-net detection and avoidance experiment at sea, using the embedded single-board computer (SBC) module and the NVIDIA Jetson AGX Xavier embedded system of our lab's AUV platform. The experimental results show that MRF-Net outperforms state-of-the-art networks in computational complexity, inference time, and prediction accuracy. In addition, the fishing-net avoidance experiment indicates that MRF-Net's detection results can support the accurate operation of the subsequent obstacle avoidance algorithm.
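Speckle suppression of the kind such a preprocessing stage performs can be illustrated with a plain median filter; the abstract does not specify the paper's actual pipeline, so this is only a generic sketch:

```python
import numpy as np

def median_filter(img, k=3):
    """Simple k x k median filter, a common first step against the
    speckle noise typical of forward-looking sonar imagery."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

The median replaces each pixel with the middle value of its neighborhood, so isolated bright speckles vanish while edges of solid returns (such as a net) are largely preserved.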


2019 ◽  
Vol 9 (18) ◽  
pp. 3915
Author(s):  
Zhenyu Zhang ◽  
Hsi-Hsien Wei ◽  
Sang Guk Yum ◽  
Jieh-Haur Chen

Automatic object-detection techniques can improve the efficiency of building data collection for semi-empirical methods that assess the seismic vulnerability of buildings at a regional scale. However, current structural element detection methods rely on the color, texture, and/or shape information of the object to be detected, and are less flexible and reliable when detecting columns or walls with unknown surface materials or deformed shapes in images. To overcome these limitations, this paper presents an innovative gray-level histogram (GLH) statistical feature-based object-detection method for automatically identifying structural elements, including columns and walls, in an image. The method starts by converting an RGB image (i.e., an image whose colors are a mix of red, green, and blue light) into a grayscale image, followed by detecting vertical boundary lines using the Prewitt operator and the Hough transform. The detected lines divide the image into several sub-regions. Then, three GLH statistical parameters (variance, skewness, and kurtosis) are calculated for each sub-region. Finally, a column or wall is recognized in a sub-region if these features satisfy predefined criteria. The method was validated by measuring detection precision and recall on column and wall images. The results indicate the high accuracy of the proposed method in detecting structural elements with various surface treatments or deflected shapes. The proposed method can be extended in the future to detect more structural characteristics and retrieve structural deficiencies from digital images, promoting automation in building data collection.


Author(s):  
Zhenhua Li ◽  
Weihui Jiang ◽  
Li Qiu ◽  
Zhenxing Li ◽  
Yanchun Xu

Background: Winding deformation is one of the most common faults in power transformers and seriously threatens their safe operation. To discover hidden transformer faults in time, it is of great significance to actively pursue research on winding deformation detection technology.
Methods: This paper summarizes several winding deformation detection methods with prospects for on-line use. The principles and characteristics of each method are analyzed, its advantages and disadvantages are expounded, and future research directions are outlined. Finally, in view of the existing problems, the future development of detection methods for winding deformation is projected.
Results: The on-line frequency response analysis method is still immature, and the vibration detection method is still at the theoretical research stage.
Conclusion: The ΔV − I1 locus method provides a new direction for on-line detection of transformer winding deformation faults, with certain application prospects and practical engineering value.

