Offset Detection of Grate Trolley’s Side Plate Based on YOLOv4

2021, Vol 2021, pp. 1-10
Author(s): Yueming Wang, Zhenru Li, Liangrui Fang, Qi Li

Side plate offset is one of the faults of the grate system. If it is not handled in time, accidents and economic losses can follow. To address the time-consuming, labour-intensive, and poorly automated nature of manual side plate offset inspection, an automatic side plate offset detection method based on You Only Look Once version 4 (YOLOv4) is proposed. Two cameras were fixed to collect images of the grate trolley’s side plates, and offset judgment rules were set with reference to the trolley’s operation. The YOLOv4 object detection algorithm was used to detect the side plates and the trolley’s chassis frame in video frames. A baseline was set according to the detected position of the chassis frame, and the intervals between the side plates and the baseline were then calculated. According to the judgment rules, the proposed scheme can detect side plate offset faults in a timely manner and raise an alarm automatically when a fault is detected. Video images of the trolley’s side plates were collected and sorted at the Baogang Group sintering plant for testing. In this experiment, no erroneous judgments were made, and the average detection and judgment time was 0.024 s. The proposed method replaces manual inspection with real-time automatic detection of side plate offset faults and thus provides a new solution for offset detection of the grate trolley’s side plate.
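As a rough illustration of the judgment step described above, the sketch below takes per-frame YOLOv4 detections (label plus box coordinates), derives a baseline from the detected chassis frame, and flags side plates whose interval to the baseline exceeds a tolerance. The class names, the use of box edges, and the pixel threshold are assumptions for illustration, not the paper's actual rules.

```python
# Hypothetical sketch of the offset-judgment step, assuming YOLOv4 detections
# are available as dicts with a label and box coordinates. Threshold and
# class names are illustrative assumptions.

OFFSET_THRESHOLD_PX = 25  # assumed tolerance between a side plate and the baseline

def judge_offsets(detections, threshold=OFFSET_THRESHOLD_PX):
    """Return side-plate boxes whose interval to the chassis-frame baseline
    exceeds the allowed tolerance."""
    chassis = [d for d in detections if d["label"] == "chassis_frame"]
    plates = [d for d in detections if d["label"] == "side_plate"]
    if not chassis:
        return []  # no baseline available in this frame
    # Use the top edge of the detected chassis frame as the baseline.
    baseline_y = min(box["y_min"] for box in chassis)

    faults = []
    for plate in plates:
        # Interval between the plate's lower edge and the baseline.
        interval = abs(plate["y_max"] - baseline_y)
        if interval > threshold:
            faults.append((plate, interval))
    return faults

# Example usage on one frame's detections:
frame_detections = [
    {"label": "chassis_frame", "y_min": 410, "y_max": 470},
    {"label": "side_plate", "y_min": 300, "y_max": 425},
    {"label": "side_plate", "y_min": 300, "y_max": 452},  # offset plate
]
for plate, interval in judge_offsets(frame_detections):
    print(f"Offset fault: interval {interval} px exceeds tolerance")  # trigger alarm here
```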


Recognition and detection of objects in observed scenes is a natural biological capability. Animals and humans perform it effortlessly in daily life to move without collisions, to find food, to avoid threats, and so on. Comparable computer techniques and algorithms for scene analysis, however, are far from straightforward, despite their remarkable progress. Object detection is the process of finding and recognizing instances of objects (for example faces, dogs, or buildings) in digital images or videos; it is a fundamental task in computer vision. To detect object instances and assign images to an object category, object detection methods typically rely on learning algorithms and extracted features. This paper proposes a method for moving object detection and vehicle detection.



Author(s): Yuxia Wang, Wenzhu Yang, Tongtong Yuan, Qian Li

Lower detection accuracy and insufficient ability to detect small objects are the main shortcomings of region-free object detection algorithms. To address these problems, an improved object detection method using feature map refinement and anchor optimization is proposed. First, a reverse fusion operation is performed on each object detection layer, providing the lower layers with more semantic information by fusing detection features from different levels. Second, a self-attention module is used to refine each detection feature map, calibrating features between channels and enhancing the expressive power of local features. In addition, an anchor optimization model is introduced on each feature layer associated with anchors, yielding anchors that are more likely to contain an object and that more closely match the object's location and size. In this model, semantic features are used to identify and remove negative anchors, reducing the object search space, and preliminary adjustments are made to anchor locations and sizes. Comprehensive experimental results on the PASCAL VOC detection benchmark demonstrate the effectiveness of the proposed method. In particular, with a VGG-16 backbone and a low-resolution 300×300 input, the proposed method achieves a mAP of 79.1% on the VOC 2007 test set with an inference time of 24.7 milliseconds per image.
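A minimal PyTorch sketch of the two ingredients named above: an SE-style attention block that calibrates features between channels, and a reverse (top-down) fusion that injects upsampled deeper features into a shallower detection layer. The block design and channel sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style calibration of features between channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        # Global average pool -> per-channel weights -> rescale the feature map.
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w.view(x.size(0), -1, 1, 1)

def reverse_fuse(shallow, deep, lateral):
    # Project the deep map to the shallow map's channel count, upsample it to
    # the shallow map's spatial size, and fuse by element-wise addition.
    deep = F.interpolate(lateral(deep), size=shallow.shape[2:], mode="nearest")
    return shallow + deep

# Example: fuse a 10x10 deep map into a 20x20 detection layer, then refine it.
shallow = torch.randn(1, 256, 20, 20)
deep = torch.randn(1, 512, 10, 10)
lateral = nn.Conv2d(512, 256, kernel_size=1)
refined = ChannelAttention(256)(reverse_fuse(shallow, deep, lateral))
print(refined.shape)  # torch.Size([1, 256, 20, 20])
```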



2021, Vol 3 (2), pp. 117-125
Author(s): M Fadhilur Rahman, Bambang Bambang

Garbage is a never-ending problem in human life, and many waste-related problems stem from human indifference to the environment. Several solutions have been proposed to address and avoid these problems, one of which is waste detection that can be deployed directly on specific devices. This study applies the Faster R-CNN object detection method to detect and classify trash objects quickly enough for a computer to process real-time video. The test results show that the method can detect trash objects in 100 images with an accuracy of 74%, and can process real-time video at a frame rate of roughly 1 frame per second (fps).
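For context, a hedged sketch of running a pretrained Faster R-CNN from torchvision (assuming torchvision ≥ 0.13 and OpenCV) over video frames. The study trained its own model on a waste dataset; the video file name and confidence threshold here are illustrative stand-ins.

```python
import cv2
import torch
import torchvision

# Generic pretrained detector as a stand-in for the paper's trained trash model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cap = cv2.VideoCapture("street.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # BGR (OpenCV) -> RGB tensor in [0, 1], shape (3, H, W).
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = model([tensor])[0]
    for box, score in zip(output["boxes"], output["scores"]):
        if score < 0.7:  # assumed confidence threshold
            continue
        x1, y1, x2, y2 = box.int().tolist()
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```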



2021, Vol 2021, pp. 1-16
Author(s): Yangmei Zhang

This paper studies underwater object detection and positioning. Objects are detected and positioned through a weak object detection algorithm based on underwater scene segmentation and an underwater positioning technology based on a three-dimensional (3D) omnidirectional magnetic induction smart sensor. The weak object detection relies on an image segmentation network with a U-shaped (U-Net) architecture, improved before application. The key quantity in the 3D omnidirectional magnetic induction positioning technology is the magnetic induction intensity. The results show that the image-enhanced object detection method improves the accuracy for Yellow Croaker, Goldfish, and Mandarin Fish by 3.2%, 1.5%, and 1.6%, respectively. For the sensor positioning technology, at positioning Signal-to-Noise Ratios (SNR) of 15 dB and 20 dB, the curves of actual distance and estimated distance follow the same trend, whereas at SNR = 10 dB the two curves deviate considerably. The conclusions are as follows: the proposed underwater scene segmentation-based weak object detection method handles invalid underwater object samples caused by poor labeling, effectively separates the background from underwater objects, removes the negative impact of invalid samples, and improves the precision of weak object detection. The positioning model based on a 3D coil magnetic induction sensor yields more accurate positioning coordinates, and the effectiveness of the 3D omnidirectional magnetic induction coil underwater positioning technology is verified by simulation experiments.
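The positioning step hinges on the magnetic induction intensity. Assuming a simple dipole-like decay of field magnitude with distance (B ≈ k/r³), a measured intensity can be inverted to a distance estimate, as in the toy sketch below; the constant k and the noiseless measurement are illustrative assumptions, not the paper's full 3D omnidirectional sensor model.

```python
def distance_from_intensity(b_measured, k):
    """Invert the assumed decay B ≈ k / r**3 to estimate the source distance r."""
    return (k / b_measured) ** (1.0 / 3.0)

k = 4.0e-5           # assumed source constant: field magnitude at 1 m (tesla)
true_r = 2.5         # metres
b = k / true_r ** 3  # idealised noiseless measurement
print(distance_from_intensity(b, k))  # ~2.5
```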



2020, Vol 2020, pp. 1-12
Author(s): Manhuai Lu, Liqin Chen

The accuracy of object detection on kitchen appliance scene images can suffer severely from external disturbances such as specular reflection, uneven lighting, and spurious lighting, as well as internal scene-related disturbances such as invalid edges and pattern information unrelated to the object of interest. The present study addresses these challenges with an object detection method based on an improved Faster R-CNN algorithm. The improved method can quickly and automatically identify object regions scattered across complex appliance scenes. We put forward a feature enhancement framework named the deeper region proposal network (D-RPN). In D-RPN, a feature enhancement module is designed to extract feature information from kitchen appliance scenes more effectively, and a U-shaped network structure is then reconstructed from a series of these modules. We evaluated the proposed D-RPN on a dataset we created, which contains a variety of kitchen appliance control panels captured in natural scenes with an image collector. In our experiments, the best-performing configuration obtained a mean average precision (mAP) of 89.84% on the test set. The results show that the proposed algorithm achieves higher detection accuracy than state-of-the-art object detection methods. The proposed detection method can further be used for text recognition.
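A speculative sketch of what a feature enhancement module and a small U-shaped stack of such modules could look like, loosely following the D-RPN description above. The residual 3×3-convolution design and the channel count are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FeatureEnhancement(nn.Module):
    """Assumed residual refinement block standing in for the paper's module."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # residual refinement of the feature map

class TinyUShape(nn.Module):
    """Encoder-decoder built from enhancement modules with one skip connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.enc = FeatureEnhancement(channels)
        self.down = nn.MaxPool2d(2)
        self.mid = FeatureEnhancement(channels)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = FeatureEnhancement(channels)

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.mid(self.down(e)))
        return self.dec(e + d)

print(TinyUShape()(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```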



2018, Vol 2018, pp. 1-10
Author(s): Guo X. Hu, Zhong Yang, Lei Hu, Li Huang, Jia M. Han

Existing object detection algorithms based on deep convolutional neural networks apply multiple levels of convolution and pooling to the entire image in order to extract deep semantic features. Such models achieve good results for large objects, but they fail on small objects, which have low resolution and are strongly affected by noise, because the features produced by repeated convolutions no longer fully represent the essential characteristics of small objects. In this paper, good detection accuracy is achieved by extracting features from different convolution levels and using these multiscale features to detect small objects. In our detection model, features are extracted from the third, fourth, and fifth convolution stages, and these three scales of features are concatenated into a single vector. The vector is used to classify objects with classifiers and to locate them by bounding-box regression. In testing, the detection accuracy of our model for small objects is 11% higher than that of state-of-the-art models. We also used the model to detect aircraft in remote sensing images and achieved good results.
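A minimal sketch of the multiscale idea above: pool feature maps from three convolution stages to a common size, concatenate them into one vector, and feed that vector to a classifier and a bounding-box regressor. The channel counts, pooled size, and head widths are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiScaleHead(nn.Module):
    def __init__(self, channels=(256, 512, 512), pooled=7, num_classes=21):
        super().__init__()
        self.pool = nn.AdaptiveMaxPool2d(pooled)
        feat_dim = sum(channels) * pooled * pooled
        self.cls = nn.Linear(feat_dim, num_classes)  # object classification
        self.reg = nn.Linear(feat_dim, 4)            # bounding-box regression

    def forward(self, conv3, conv4, conv5):
        # Pool each scale to the same spatial size and concatenate into one vector.
        vec = torch.cat([self.pool(f).flatten(1) for f in (conv3, conv4, conv5)], dim=1)
        return self.cls(vec), self.reg(vec)

head = MultiScaleHead()
scores, boxes = head(torch.randn(1, 256, 28, 28),
                     torch.randn(1, 512, 14, 14),
                     torch.randn(1, 512, 7, 7))
print(scores.shape, boxes.shape)  # torch.Size([1, 21]) torch.Size([1, 4])
```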



2020, Vol 2020, pp. 1-13
Author(s): Bao-Yuan Chen, Yu-Kun Shen, Kun Sun

At present, object detectors based on convolutional neural networks generally rely on the last layer of features produced by the feature extraction network, and positional information cannot be fully propagated through the repeated convolution and pooling of deep features. This paper proposes a multiscale feature reuse detection model comprising a DenseNet feature extraction backbone, a feature fusion network, a multiscale anchor region proposal network, and classification and regression networks. Fusing high-level and low-level features not only strengthens the model's sensitivity to objects of different sizes but also improves the flow of information, so that the feature maps carry rich deep semantic information and shallow location information at the same time, which significantly improves the robustness and detection accuracy of the model. The algorithm is trained and tested on the PASCAL VOC 2007 dataset. The experimental results show a mean average precision of 73.87% on this dataset. Compared with the mainstream Faster R-CNN and SSD detection models, the mean average precision of the DenseNet-based object detection algorithm is higher by 5.63% and 3.86%, respectively.
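For reference, a short sketch of the DenseNet-style feature reuse that underpins the backbone named above: each layer receives the concatenation of all earlier feature maps. Growth rate and layer count are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate, 3, padding=1)))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each layer sees every earlier feature map (feature reuse).
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 192, 32, 32])
```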



2021, Vol 10 (11), pp. 742
Author(s): Xiaoyue Luo, Yanhui Wang, Benhe Cai, Zhanxing Li

Previous research on moving object detection in traffic surveillance video has mostly adopted a single threshold to eliminate the noise caused by external environmental interference, resulting in low accuracy and low efficiency of moving object detection. We therefore propose a moving object detection method that accounts for spatial variation of the threshold across the image, i.e., a moving object detection method using an adaptive threshold (MOD-AT for short). In particular, based on the homography method, we first establish the mapping between the geometric imaging characteristics of moving objects in image space and the minimum bounding rectangle (BLOB) of moving objects in geographic space, so as to calculate the projected size of each moving object in image space; this lets us set an adaptive threshold for each moving object and precisely remove noise interference during detection. Further, we propose a moving object detection algorithm called GMM_BLOB (GMM denotes Gaussian mixture model) to achieve high-precision detection and noise removal. The case-study results show the following: (1) compared with existing object detection algorithms, the median error (MD) of the MOD-AT algorithm is reduced by 1.2–11.05% and the mean error (MN) by 1.5–15.5%, indicating higher accuracy in single-frame detection; (2) in terms of overall accuracy, the performance and time efficiency of the MOD-AT algorithm are improved by 7.9–24.3%, reflecting its higher efficiency; (3) the average precision (MP) of the MOD-AT algorithm is improved by 17.13–44.4%, the average recall (MR) by 7.98–24.38%, and the average F1-score (MF) by 10.13–33.97%. Overall, the MOD-AT algorithm is more accurate, efficient, and robust.
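A hedged sketch (assuming OpenCV 4) of the two steps described above: a Gaussian mixture background model produces candidate moving blobs, and a homography-derived, location-dependent area threshold filters them. The homography matrix, the reference ground area, and the filtering rule are illustrative assumptions, not the MOD-AT implementation.

```python
import cv2
import numpy as np

# Assumed image->ground-plane homography; in practice it would be calibrated.
H = np.array([[0.02, 0.0, -3.0],
              [0.0, 0.025, -1.5],
              [0.0, 0.0, 1.0]])
backsub = cv2.createBackgroundSubtractorMOG2()  # Gaussian mixture background model

def min_blob_area_px(cx, cy, ground_area_m2=1.0):
    """Adaptive per-location threshold: map the blob centre to the ground plane,
    place a reference square there, and measure its area back in image pixels."""
    ground = cv2.perspectiveTransform(np.float32([[[cx, cy]]]), H)[0, 0]
    side = float(np.sqrt(ground_area_m2))
    corners = np.array([[ground],
                        [ground + [side, 0.0]],
                        [ground + [0.0, side]]], dtype=np.float32)
    img = cv2.perspectiveTransform(corners, np.linalg.inv(H))
    return float(np.linalg.norm(img[1, 0] - img[0, 0]) *
                 np.linalg.norm(img[2, 0] - img[0, 0]))

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical surveillance video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Keep only blobs larger than the location-dependent threshold.
        if cv2.contourArea(c) >= min_blob_area_px(x + w / 2.0, y + h / 2.0):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
cap.release()
```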




