SHADOW PROCESSING TECHNOLOGY OF AGRICULTURAL PLANT VIDEO IMAGE BASED ON PROBABLE LEARNING PIXEL CLASSIFICATION

2020 ◽  
Vol 60 (1) ◽  
pp. 201-210
Author(s):  
Cheng Yang ◽  
Ping Wang ◽  
Yan Bao

To address the difficulty of preprocessing shadows in crop video images, a probable learning pixel classification method is proposed and its processing technology studied. The algorithm detects shadow areas effectively by performing intelligent collaborative video detection on the shaded parts of a crop video sequence. First, a cloud collaborative detection algorithm broadly applicable to agriculture is proposed: video key frames are obtained, and a background modeling algorithm strongly adaptive to crop illumination is applied to detect the target in real time, thereby constructing the crop pixel model. Finally, the proposed algorithm and the constructed model are applied to shadow processing of agricultural plant video images for experimental verification. The results show that in video frames 47, 194, and 258, the probable learning pixel classification method determines the shaded part of each frame, which greatly improves the detection accuracy of crop shadows. The research shows that the probable learning pixel classification method enhances the shadow robustness and accuracy of crop video image processing.
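
The pixel-level shadow decision described above can be illustrated with a simple chromaticity-ratio rule: a shadow pixel is a darkened copy of the background pixel with nearly unchanged colour balance. This is a generic sketch under assumed thresholds, not the paper's probable learning classifier.

```python
def classify_shadow(pixel, background, ratio_low=0.4, ratio_high=0.9, chroma_tol=0.1):
    """Label a pixel as shadow if it is a darkened version of the background
    with similar chromaticity. Threshold values are illustrative assumptions."""
    pr, pg, pb = pixel
    br, bg, bb = background
    p_sum = (pr + pg + pb) or 1
    b_sum = (br + bg + bb) or 1
    # Intensity ratio: shadows darken the surface within a bounded range.
    ratio = p_sum / b_sum
    if not (ratio_low <= ratio <= ratio_high):
        return False
    # Chromaticity (normalized colour) should stay close under shading.
    return all(abs(p / p_sum - b / b_sum) < chroma_tol
               for p, b in zip((pr, pg, pb), (br, bg, bb)))
```

A uniformly darkened gray pixel is accepted, while a pixel whose colour balance shifts (e.g. a green object entering the scene) is rejected even if it is darker.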

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Fengzhen Jia ◽  
Shiqiang Xu ◽  
Jiaofei Huo

With the continuing development of multimedia teaching, the combination of virtual reality (VR) and video image control has attractive prospects in ideological and political teaching, for example through the use of virtual technology in games. However, most current virtual reality environments are only partially built, and the functional development of artificial intelligence multimedia teaching systems is not comprehensive. An artificial intelligence VR video image control system is therefore constructed for the multimedia teaching system. This article analyzes the development of artificial intelligence multimedia teaching systems and compares the detection performance and efficiency of traditional methods with those of artificial intelligence multimedia VR ideological and political teaching. The research shows that, when VR is used to control the images of ideological and political teaching, the average accuracy over the ten video images is 75.68%, indicating that the video image classification and detection algorithm model based on artificial intelligence in this paper can extract deeper and more abstract features to classify the target. The artificial intelligence VR video image control algorithm constructed in this paper can reduce the maximum failure rate by 49.16%, 61.02%, and 66.94%, respectively. Compared with the traditional algorithm, it reduces the storage access delay of 10 different video images by an average of 15.93%, obtains about 9.37% performance optimization, and reduces the video image control time by 7.28% and 10.63%, respectively. For pictures, the artificial intelligence VR video image control system in this article can increase performance by up to 28.49%.


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Hongxin Tang

At present, existing algorithms for detecting the parabola of the tennis serve neglect pre-estimation of the global motion information of the ball, which leads to large errors and a low recognition rate. Therefore, a new algorithm for detecting the serve parabola based on video image analysis is proposed. The global motion information is estimated in advance, and the motion features of the target are extracted. A tennis ball appearance model is established by sparse representation, and the data of the high-resolution flight appearance model are processed with data fusion technology to track the parabolic trajectory. Based on an analysis of the serve mechanics and a nonlinear transformation of the parabolic trajectory state vector, the starting point of the parabola is determined, the trajectory is obtained, and the detection algorithm for the serve parabola is designed. Experimental results show that, compared with two other algorithms, the proposed algorithm can recognize the trajectory of the parabola at its different stages and detects the parabola more accurately in the three-dimensional space of the tennis serve.
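
The parabolic-trajectory recovery at the core of such an algorithm can be illustrated by fitting y = a·t² + b·t + c to tracked ball positions. The sketch below solves the three-point case exactly via divided differences; it is an illustration only, not the paper's sparse-representation tracker or its state-vector transformation.

```python
def fit_parabola(p1, p2, p3):
    """Recover y = a*t^2 + b*t + c from three (t, y) samples of a tracked
    ball. Uses the Newton divided-difference solution of the Vandermonde
    system, then expands back to monomial coefficients."""
    (t1, y1), (t2, y2), (t3, y3) = p1, p2, p3
    d1 = (y2 - y1) / (t2 - t1)          # first divided difference
    d2 = (y3 - y2) / (t3 - t2)
    a = (d2 - d1) / (t3 - t1)           # second divided difference = leading coeff
    b = d1 - a * (t1 + t2)
    c = y1 - a * t1 * t1 - b * t1       # enforce passage through (t1, y1)
    return a, b, c
```

With more than three tracked points, a least-squares fit would replace the exact solve, and the apex of the recovered parabola (t = -b / 2a) marks the trajectory turning point.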


Author(s):  
Dongxian Yu ◽  
Jiatao Kang ◽  
Zaihui Cao ◽  
Neha Jain

Current traffic sign detection technology struggles to detect signs correctly under the interference of various complex factors, and its robustness is weak; therefore, a traffic sign detection algorithm based on region-of-interest extraction and a double filter is designed. First, to reduce environmental interference, the input image is preprocessed to enhance the main color of each sign. Second, to improve the extraction of regions of interest, a region-of-interest (ROI) detector based on Maximally Stable Extremal Regions (MSER) and the Wave Equation (WE) is defined, and candidate regions are selected by the ROI detector. Then, an effective HOG (Histogram of Oriented Gradients) descriptor is introduced as the detection feature of traffic signs, and an SVM (Support Vector Machine) classifies them as traffic signs or background. Finally, a context-aware filter and a traffic light filter are used to further reject false traffic signs and improve detection accuracy. Tests on three kinds of traffic signs in the GTSDB database (indicative, prohibitory, and danger signs) show that the proposed algorithm has higher detection accuracy and robustness than current traffic sign recognition technology.
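
The HOG feature step can be illustrated at the level of a single cell: gradient magnitudes are accumulated into bins of unsigned orientation. This is a minimal sketch of the descriptor idea only; the full pipeline (MSER ROI extraction, block normalization, SVM classification, double filtering) is not reproduced.

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Orientation histogram for one HOG cell (a 2D list of grayscale values).
    Central-difference gradients; unsigned orientation in [0, 180) degrees,
    as in standard HOG. Border pixels are skipped for simplicity."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    bin_width = 180.0 / bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / bin_width) % bins] += mag
    return hist
```

Concatenating normalized histograms of all cells in a detection window yields the descriptor that the SVM would score.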


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in the field of remote sensing because of its all-day, all-weather capability. Ship monitoring in national territorial waters supports maritime law enforcement, traffic control, and national maritime security, so ship detection has long been a research hot spot. As the field has moved from traditional detection methods to deep learning, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while transplanting optical image detectors without accounting for the low signal-to-noise ratio, low resolution, single-channel format, and other characteristics imposed by the SAR imaging principle. Detection accuracy has been pursued at the expense of detection speed and practical deployment: almost all such algorithms depend on powerful clustered desktop GPUs and cannot be deployed on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of the image information and the network's ability to extract features; the model architecture and training are based on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network was tailored for real-time deployment, significantly reducing model size, detection time, number of parameters, and memory consumption, and the network was refined for three-channel images to compensate for the accuracy lost to light-weighting.
The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value in maritime safety monitoring and emergency rescue.
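
One way to turn a single-channel SAR image into the three channels a YOLO backbone expects is to pair the raw intensities with a despeckled copy and a gradient map. The channel design below is an assumption for illustration; the paper does not specify this exact fusion.

```python
def sar_to_three_channels(img):
    """Expand a single-channel SAR image (2D list) into three channels:
    raw intensities, a 3x3-mean despeckled copy, and a gradient-magnitude
    map (L1 norm of central differences). Clamp-to-edge borders."""
    h, w = len(img), len(img[0])

    def at(y, x):  # clamp-to-edge indexing
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    mean = [[sum(at(y + dy, x + dx)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
             for x in range(w)] for y in range(h)]
    grad = [[abs(at(y, x + 1) - at(y, x - 1)) +
             abs(at(y + 1, x) - at(y - 1, x))
             for x in range(w)] for y in range(h)]
    return [img, mean, grad]
```

The smoothed channel suppresses speckle noise while the gradient channel emphasizes ship-sea boundaries, giving the network complementary views of the same scene.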


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, awareness of the driving environment is critical to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed on the KITTI vision benchmark suite dataset. The results demonstrate that detection accuracy with the integrated PCD BEV representations is superior to that of an RGB camera alone. Robustness is also improved: detection accuracy is significantly enhanced even when target objects are partially occluded when viewed from the front, showing that the proposed algorithm outperforms the conventional RGB-based model.
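
The non-maximum suppression step that merges the parallel detection results can be sketched as the standard greedy procedure: keep the highest-scoring box, drop neighbours that overlap it beyond a threshold, and repeat. This is textbook NMS, not the paper's exact fusion logic.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / float(union or 1)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression; returns kept indices in score order."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        # Suppress remaining boxes that overlap the kept box too much.
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep
```

When detections from the RGB stream and the LiDAR height-map stream land on the same object, only the stronger of the near-duplicate boxes survives.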


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1081
Author(s):  
Tamon Miyake ◽  
Shintaro Yamamoto ◽  
Satoshi Hosono ◽  
Satoshi Funabashi ◽  
Zhengxue Cheng ◽  
...  

Gait phase detection, which detects foot-contact and foot-off states during walking, is important for various applications, such as synchronous robotic assistance and health monitoring. Gait phase detection systems have been proposed with various wearable devices, sensing inertial, electromyography, or force myography information. In this paper, we present a novel gait phase detection system with static standing-based calibration using muscle deformation information. The gait phase detection algorithm can be calibrated within a short time using muscle deformation data by standing in several postures; it is not necessary to collect data while walking for calibration. A logistic regression algorithm is used as the machine learning algorithm, and the probability output is adjusted based on the angular velocity of the sensor. An experiment is performed with 10 subjects, and the detection accuracy of foot-contact and foot-off states is evaluated using video data for each subject. The median accuracy is approximately 90% during walking based on calibration for 60 s, which shows the feasibility of the static standing-based calibration method using muscle deformation information for foot-contact and foot-off state detection.
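
The paper's classifier is logistic regression whose probability output is adjusted by the sensor's angular velocity. The sketch below uses a simple linear damping of the contact probability by gyroscope magnitude; the adjustment form and the gain value are illustrative assumptions, not the paper's calibration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def foot_contact_probability(features, weights, bias, gyro_z, gyro_gain=0.05):
    """Logistic-regression probability of foot contact from muscle-deformation
    features, shifted by angular velocity. Fast shank rotation (swing phase)
    argues against ground contact, so the probability is damped accordingly."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    p = sigmoid(z)
    return min(1.0, max(0.0, p - gyro_gain * abs(gyro_z)))
```

Weights and bias would come from fitting on the static standing postures; at run time only the learned linear score plus the gyro correction is evaluated, which keeps the method cheap enough for wearables.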


2016 ◽  
Vol 23 (4) ◽  
pp. 579-592 ◽  
Author(s):  
Jaromir Przybyło ◽  
Eliasz Kańtoch ◽  
Mirosław Jabłoński ◽  
Piotr Augustyniak

Videoplethysmography is currently recognized as a promising noninvasive heart rate measurement method, advantageous for ubiquitous monitoring of humans in natural living conditions. Although the method is being considered for application in several areas, including telemedicine, sports, and assisted living, its dependence on lighting conditions and camera performance has not yet been sufficiently investigated. In this paper we report on research into various image acquisition aspects, including the lighting spectrum, frame rate, and compression. In the experimental part, we recorded five video sequences in various lighting conditions (fluorescent artificial light, dim daylight, infrared light, incandescent light bulb) using a programmable frame rate camera, with a pulse oximeter as the reference. For video sequence-based heart rate measurement we implemented a pulse detection algorithm based on the power spectral density, estimated using Welch’s technique. The results showed that lighting conditions and selected video camera settings, including compression and the sampling frequency, influence the heart rate detection accuracy. The average heart rate error varies from 0.35 beats per minute (bpm) for fluorescent light to 6.6 bpm for dim daylight.
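
The spectral pulse-detection idea can be sketched by locating the dominant frequency of the intensity trace inside the physiological band. A naive discrete Fourier transform stands in here for the Welch PSD estimator used in the paper; the band limits are common assumptions for resting heart rates.

```python
import cmath
import math

def estimate_heart_rate(signal, fs, f_min=0.7, f_max=3.0):
    """Estimate heart rate in bpm as the strongest spectral component of a
    mean-removed photoplethysmographic trace within [f_min, f_max] Hz
    (42-180 bpm). Naive DFT; Welch averaging would reduce variance."""
    n = len(signal)
    mean = sum(signal) / n
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if not (f_min <= f <= f_max):
            continue
        coeff = sum((signal[t] - mean) * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2
        if power > best_p:
            best_f, best_p = f, power
    return 60.0 * best_f
```

A 10 s window at 30 fps gives 0.1 Hz (6 bpm) frequency resolution, which is one reason frame rate and compression settings matter for accuracy.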


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Ming Xia ◽  
Peiliang Sun ◽  
Xiaoyan Wang ◽  
Yan Jin ◽  
Qingzhang Chen

Localization is a fundamental research issue in wireless sensor networks (WSNs). In most existing localization schemes, several beacons are used to determine the locations of sensor nodes. These localization mechanisms are frequently based on an assumption that the locations of beacons are known. Nevertheless, for many WSN systems deployed in unstable environments, beacons may be moved unexpectedly; that is, beacons are drifting, and their location information will no longer be reliable. As a result, the accuracy of localization will be greatly affected. In this paper, we propose a distributed beacon drifting detection algorithm to locate those accidentally moved beacons. In the proposed algorithm, we designed both beacon self-scoring and beacon-to-beacon negotiation mechanisms to improve detection accuracy while keeping the algorithm lightweight. Experimental results show that the algorithm achieves its designed goals.
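
The beacon self-scoring idea can be sketched as follows: each beacon compares ranging measurements to its neighbours against the distances implied by its recorded coordinates, and a large typical residual marks it as a drift suspect. The median is used here because it is robust to one drifted neighbour; the threshold and the use of the median are assumptions, and the paper's beacon-to-beacon negotiation step is omitted.

```python
import math
from statistics import median

def beacon_drift_suspects(coords, measured, tol=1.0):
    """Self-scoring step of distributed beacon-drift detection.
    coords: recorded (x, y) per beacon; measured: dict mapping an (i, j)
    beacon pair to its ranging measurement. Returns suspect beacon indices."""
    suspects = []
    for i, (xi, yi) in enumerate(coords):
        residuals = []
        for j, (xj, yj) in enumerate(coords):
            if j == i:
                continue
            d_meas = measured.get((i, j), measured.get((j, i)))
            if d_meas is None:
                continue  # no ranging link between this pair
            residuals.append(abs(d_meas - math.hypot(xi - xj, yi - yj)))
        if residuals and median(residuals) > tol:
            suspects.append(i)
    return suspects
```

A drifted beacon sees most of its residuals grow, while its stationary neighbours each see only the one link to it disagree, which the median discounts.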

