Design and Implementation of Intelligent Inspection and Alarm Flight System for Epidemic Prevention

Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 68
Author(s):  
Jiwei Fan ◽  
Xiaogang Yang ◽  
Ruitao Lu ◽  
Xueli Xie ◽  
Weipeng Li

Unmanned aerial vehicles (UAVs) and related technologies have played an active role in the prevention and control of the novel coronavirus at home and abroad, especially in epidemic prevention, surveillance, and elimination. However, existing UAVs have a single function, limited processing capacity, and poor interaction. To overcome these shortcomings, we designed an intelligent anti-epidemic patrol detection and warning flight system, which integrates UAV autonomous navigation, deep learning, intelligent voice, and other technologies. Based on convolutional neural networks and deep learning technology, the system incorporates a crowd density detection method and a face mask detection method that can locate dense crowds. Intelligent voice alarm technology was used to raise alarms for abnormal situations, such as crowd-gathering areas and people without masks, and to disseminate epidemic prevention policies, providing a powerful technical means for epidemic prevention and for delaying the spread of the epidemic. To verify the superiority and feasibility of the system, high-precision online analysis was carried out for the crowd in the inspection area, and pedestrians’ faces were detected on the ground to identify whether they were wearing a mask. The experimental results show that the mean absolute error (MAE) of the crowd density detection was less than 8.4, and the mean average precision (mAP) of face mask detection was 61.42%. The system can provide convenient and accurate evaluation information for decision-makers and meets the requirements of real-time and accurate detection.
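The crowd-density figure quoted above is a mean absolute error between predicted and ground-truth head counts. As a minimal, hypothetical illustration (the per-frame counts below are invented, not from the paper), this is how such a metric is computed:

```python
import numpy as np

def crowd_count_mae(predicted_counts, true_counts):
    """Mean absolute error between predicted and ground-truth crowd counts."""
    predicted = np.asarray(predicted_counts, dtype=float)
    truth = np.asarray(true_counts, dtype=float)
    return float(np.mean(np.abs(predicted - truth)))

# Hypothetical per-frame head counts from one patrol flight
print(crowd_count_mae([42, 17, 88, 5], [45, 15, 95, 4]))  # 3.25
```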

Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 2137 ◽  
Author(s):  
Soojeong Lee ◽  
Gangseong Lee ◽  
Gwanggil Jeon

Oscillometric blood pressure (BP) monitors currently estimate a single point but do not identify variations in response to physiological characteristics. In this paper, to analyze the normality of BP based on oscillometric measurements, we use statistical approaches including kurtosis, skewness, Kolmogorov-Smirnov, and correlation tests. Then, to mitigate uncertainties, we use a deep learning method to determine the confidence limits (CLs) of BP measurements based on their normality. The proposed deep learning regression model decreases the standard deviation of error (SDE) of the mean error and the mean absolute error and reduces the uncertainties of the CLs and SDEs. We validate that the distribution of the BP estimates fits the standard normal distribution very well. We use a rank test in the deep learning technique to demonstrate the independence of the artificial systolic BP and diastolic BP estimations. We perform statistical tests to verify the normality of the BP measurements for individual subjects. The proposed methodology provides accurate BP estimations and reduces the uncertainties associated with the CLs and SDEs using the deep learning algorithm.
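The normality screening described above can be sketched with standard statistical tooling. In the example below, skewness, kurtosis, and a Kolmogorov-Smirnov test are applied to simulated systolic BP estimates; the sample size, mean, and spread are assumptions for illustration, not values from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated systolic BP estimates (mmHg), standing in for oscillometric outputs
sbp_estimates = rng.normal(loc=120, scale=8, size=500)

# Shape statistics used to screen for departures from normality
print("skewness:", stats.skew(sbp_estimates))
print("kurtosis:", stats.kurtosis(sbp_estimates))

# Kolmogorov-Smirnov test of the standardized sample against N(0, 1)
z = (sbp_estimates - sbp_estimates.mean()) / sbp_estimates.std(ddof=1)
ks_stat, p_value = stats.kstest(z, "norm")
print("KS statistic:", ks_stat, "p-value:", p_value)
```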


Optics ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 87-95
Author(s):  
Xudong Yuan ◽  
Yaguang Xu ◽  
Ruizhi Zhao ◽  
Xuhao Hong ◽  
Ronger Lu ◽  
...  

The Laguerre-Gaussian (LG) beam demonstrates great potential for optical communication due to the orthogonality between its eigenstates, and it has gained increased research interest in recent years. Here, we propose a dual-output mode analysis method based on deep learning that can accurately obtain both the mode weight and phase information of multimode LG beams. We reconstruct the LG beams based on the results predicted by the convolutional neural network. The correlation coefficient values after reconstruction are above 0.9999, and the mean absolute errors (MAE) of the mode weights and phases are about 1.4 × 10⁻³ and 2.9 × 10⁻³, respectively. The model still maintains relatively accurate predictions for an unseen data set and for noise-disturbed samples. In addition, the computation time of the model for a single test sample is only 0.975 ms on average. These results show that our method has good generalization and robustness and allows for nearly real-time modal analysis.
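A dual-output regression network of the kind described above might be sketched as follows in Keras; the backbone depth, image resolution, number of modes, and choice of linear output heads are assumptions for illustration, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_MODES = 6               # assumed number of LG eigenstates in the superposition
IMG_SHAPE = (64, 64, 1)   # assumed size of the beam-intensity image

inputs = tf.keras.Input(shape=IMG_SHAPE)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)

# Two regression heads: one for the mode weights, one for the relative phases
weights_out = layers.Dense(N_MODES, name="mode_weights")(x)
phases_out = layers.Dense(N_MODES, name="mode_phases")(x)

model = Model(inputs, [weights_out, phases_out])
model.compile(optimizer="adam",
              loss={"mode_weights": "mae", "mode_phases": "mae"})
model.summary()
```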


2021 ◽  
Vol 20 ◽  
pp. 153303382110624
Author(s):  
Xudong Xue ◽  
Yi Ding ◽  
Jun Shi ◽  
Xiaoyu Hao ◽  
Xiangbin Li ◽  
...  

Objective: To generate synthetic CT (sCT) images with high quality from CBCT and planning CT (pCT) images for dose calculation using deep learning methods. Methods: 169 nasopharyngeal carcinoma (NPC) patients with a total of 20926 slices of CBCT and pCT images were included. In this study, CycleGAN, Pix2pix, and U-Net models were used to generate the sCT images. The mean absolute error (MAE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were used to quantify the accuracy of the proposed models in a testing cohort of 34 patients. Radiation doses were calculated on the pCT and sCT following the same protocol. Dose distributions were evaluated for 4 patients by comparing dose-volume histograms (DVH) and 2D gamma index analysis. Results: Relative to the original CBCT, the average MAE and RMSE values between the sCT generated by the three models and the pCT decreased by at least 15.4 HU and 26.8 HU, while the mean PSNR and SSIM metrics increased by at most 10.6 and 0.05, respectively. There were only slight differences in the DVH of selected contours between the different plans. The passing rates of the 2D gamma index analysis under the 3 mm/3%, 3 mm/2%, 2 mm/3%, and 2 mm/2% criteria were all higher than 95%. Conclusions: All sCT images achieved better evaluation metrics than the original CBCT, and the CycleGAN model performed best among the three methods. The dosimetric agreement confirms the HU accuracy and consistent anatomical structures of sCT generated by deep learning methods.
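The four image-quality metrics used to compare sCT and pCT can be computed as in the sketch below, assuming co-registered 2D HU arrays and an assumed HU data range; this is an illustrative helper, not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sct_metrics(sct, pct, data_range=4096.0):
    """MAE, RMSE, PSNR, and SSIM between a synthetic CT and the planning CT.

    sct, pct: same-shape 2D arrays in HU; data_range is an assumed HU span.
    """
    sct = np.asarray(sct, dtype=float)
    pct = np.asarray(pct, dtype=float)
    diff = sct - pct
    return {
        "MAE": float(np.mean(np.abs(diff))),
        "RMSE": float(np.sqrt(np.mean(diff ** 2))),
        "PSNR": peak_signal_noise_ratio(pct, sct, data_range=data_range),
        "SSIM": structural_similarity(pct, sct, data_range=data_range),
    }
```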


This paper presents a deep learning approach for age estimation of human beings from their facial images. Different racial groups based on skin colour have been incorporated into the annotations of the images in the dataset, while ensuring an adequate distribution of subjects across the racial groups, so as to achieve accurate Automatic Facial Age Estimation (AFAE). The principle of transfer learning is applied to a ResNet50 Convolutional Neural Network (CNN) initially pretrained for the task of object classification, and its hyperparameters are fine-tuned to propose an AFAE system that can automatically estimate the ages of humans across multiple racial groups. A mean absolute error of 4.25 years is obtained, which demonstrates the effectiveness of the proposed method.
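A transfer-learning setup along these lines, with an ImageNet-pretrained ResNet50 backbone and a small regression head trained on an MAE loss, might look like the sketch below; the head layers, input size, and learning rate are assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

# ResNet50 pretrained for object classification, with the top layers removed
backbone = ResNet50(weights="imagenet", include_top=False,
                    input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False  # freeze the backbone first; fine-tune later

x = layers.Dense(256, activation="relu")(backbone.output)
x = layers.Dropout(0.3)(x)
age_output = layers.Dense(1, name="age")(x)  # single regressed age in years

model = Model(backbone.input, age_output)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="mae", metrics=["mae"])
```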


Horticulturae ◽  
2021 ◽  
Vol 8 (1) ◽  
pp. 21
Author(s):  
Jizhang Wang ◽  
Zhiheng Gao ◽  
Yun Zhang ◽  
Jing Zhou ◽  
Jianzhi Wu ◽  
...  

In order to realize real-time and accurate detection of potted flowers on benches, in this paper we propose a method based on the ZED 2 stereo camera and the YOLO V4-Tiny deep learning algorithm for potted flower detection and location. First, an automatic detection model of flowers was established based on the YOLO V4-Tiny convolutional neural network (CNN) model, and the center points of the flowers on the pixel plane were obtained from the prediction boxes. Then, the real-time 3D point cloud information obtained by the ZED 2 camera was used to calculate the actual position of the flowers. The test results showed that the mean average precision (mAP) and recall rate of the training model were 89.72% and 80%, respectively, and the real-time average detection frame rate of the model deployed on a Jetson TX2 was 16 FPS. The results of the occlusion experiment showed that when the canopy overlap ratio between two flowers is more than 10%, the recognition accuracy is affected. The mean absolute error of the flower center location based on the 3D point cloud information of the ZED 2 camera was 18.1 mm, and the maximum location error of the flower center was 25.8 mm under different light radiation conditions. The method in this paper establishes the relationship between the detected flowers and their actual spatial locations, which provides a reference for the mechanized and automatic management of potted flowers on benches.
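Mapping a predicted bounding box to a 3D flower position via an image-registered point cloud can be sketched as follows. The helper is generic and hypothetical: it assumes the point cloud is already aligned with the image pixels (as the ZED 2 depth output is) and does not call the ZED SDK itself.

```python
import numpy as np

def flower_center_3d(bbox, point_cloud_xyz):
    """Look up the 3D position of a detected flower.

    bbox: (x_min, y_min, x_max, y_max) from the detector, in pixels.
    point_cloud_xyz: H x W x 3 array of camera-frame coordinates (mm),
    assumed to be registered pixel-for-pixel to the image.
    """
    x_min, y_min, x_max, y_max = bbox
    u = int((x_min + x_max) / 2)   # pixel-plane center of the prediction box
    v = int((y_min + y_max) / 2)
    return point_cloud_xyz[v, u]   # (X, Y, Z) of the flower center

# Hypothetical usage with a 720p point cloud and one detection
cloud = np.random.rand(720, 1280, 3) * 1000.0
print(flower_center_3d((400, 300, 520, 420), cloud))
```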


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kaori Ishii ◽  
Ryo Asaoka ◽  
Takashi Omoto ◽  
Shingo Mitaki ◽  
Yuri Fujino ◽  
...  

Abstract: The purpose of the current study was to predict intraocular pressure (IOP) using color fundus photography with a deep learning (DL) model, or using systemic variables with a multivariate linear regression model (MLM), along with least absolute shrinkage and selection operator (LASSO) regression, a support vector machine (SVM), and a random forest (RF). The training dataset included 3883 examinations from 3883 eyes of 1945 subjects, and the testing dataset included 289 examinations from 289 eyes of 146 subjects. With the training dataset, the MLM was constructed to predict IOP using 35 systemic variables and 25 blood measurements. A DL model was developed to predict IOP from color fundus photographs. The prediction accuracy of each model was evaluated through the absolute error and the marginal R-squared (mR2), using the testing dataset. The mean absolute error with the MLM was 2.29 mmHg, which was significantly smaller than that with the DL model (2.70 mmHg). The mR2 with the MLM was 0.15, whereas that with the DL model was 0.0066. The mean absolute errors (between 2.24 and 2.30 mmHg) and mR2 values (between 0.11 and 0.15) with LASSO, SVM, and RF were similar to or poorer than those of the MLM. A DL model predicting IOP from color fundus photography proved far less accurate than an MLM using systemic variables.
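A minimal version of the MLM baseline, fitting an ordinary least-squares model on tabular predictors and reporting the mean absolute error on a held-out set, is sketched below with synthetic stand-in data; the real study used 35 systemic variables and 25 blood measurements, which the 60 random features here merely mimic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
# Synthetic stand-ins: 60 predictors per examination, IOP targets in mmHg
X_train, X_test = rng.normal(size=(3883, 60)), rng.normal(size=(289, 60))
y_train = rng.normal(loc=15, scale=3, size=3883)
y_test = rng.normal(loc=15, scale=3, size=289)

mlm = LinearRegression().fit(X_train, y_train)
print("MAE (mmHg):", mean_absolute_error(y_test, mlm.predict(X_test)))
```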


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 72
Author(s):  
Viktorija Valiuškaitė ◽  
Vidas Raudonis ◽  
Rytis Maskeliūnas ◽  
Robertas Damaševičius ◽  
Tomas Krilavičius

We propose a deep learning method based on the Region-Based Convolutional Neural Network (R-CNN) architecture for the evaluation of sperm head motility in human semen videos. The neural network performs the segmentation of sperm heads, while the proposed central coordinate tracking algorithm allows us to calculate the movement speed of sperm heads. We achieved 91.77% (95% CI, 91.11–92.43%) accuracy of sperm head detection on the VISEM (A Multimodal Video Dataset of Human Spermatozoa) sperm sample video dataset. The mean absolute error (MAE) of sperm head vitality prediction was 2.92 (95% CI, 2.46–3.37), while the Pearson correlation between actual and predicted sperm head vitality was 0.969. The experimental results show that the proposed method is applicable to an automated artificial insemination workflow.
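Once sperm heads are segmented and tracked, the movement speed follows from per-frame centroid displacements. The sketch below shows one way to turn a centroid track into an average speed; the frame rate and pixel calibration are hypothetical values, not parameters from the VISEM dataset.

```python
import numpy as np

def head_speed(track_px, fps, um_per_px):
    """Average speed of one sperm head from its tracked centroid positions.

    track_px: sequence of (x, y) centre coordinates, one per frame (pixels).
    fps: video frame rate; um_per_px: assumed pixel-size calibration.
    """
    pts = np.asarray(track_px, dtype=float)
    step_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # px per frame
    return step_lengths.mean() * fps * um_per_px                 # um per second

# Hypothetical 5-frame track at 50 fps with 0.8 um per pixel
print(head_speed([(10, 10), (12, 11), (15, 12), (18, 14), (21, 15)], 50, 0.8))
```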


2020 ◽  
Vol 12 (22) ◽  
pp. 3833
Author(s):  
Chao Ji ◽  
Hong Tang

Stereo photogrammetric surveys have traditionally been used to estimate the number of stories of buildings from satellite remote sensing: the height of a building is extracted first and then converted into a number of stories through certain rules. In contrast, in this paper we propose a new method that uses deep learning to estimate the number of stories of buildings end to end from monocular optical satellite images. To the best of our knowledge, this is the first attempt to directly estimate the number of stories of buildings from monocular satellite images. Specifically, in the proposed method, we extend a classic object detection network, i.e., Mask R-CNN, by adding a new head to predict the number of stories of detected buildings from satellite images. GF-2 images from nine cities in China are used to validate the effectiveness of the proposed method. The experimental results show that the mean absolute errors of prediction for buildings with 1–7, 8–20, and more than 20 stories are 1.329, 3.546, and 8.317, respectively, which indicates that our method has application potential for low-rise buildings, while the accuracy for middle-rise and high-rise buildings needs to be further improved.
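The per-range errors quoted above amount to bucketing buildings by their true number of stories and computing the MAE within each bucket, as in the sketch below; the example predictions are hypothetical.

```python
import numpy as np

def storey_mae_by_range(predicted, actual):
    """MAE of predicted storey counts for low-, middle-, and high-rise groups."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    ranges = {"1-7": (1, 7), "8-20": (8, 20), ">20": (21, np.inf)}
    maes = {}
    for name, (lo, hi) in ranges.items():
        mask = (actual >= lo) & (actual <= hi)
        if mask.any():
            maes[name] = float(np.mean(np.abs(predicted[mask] - actual[mask])))
    return maes

# Hypothetical detections from a GF-2 scene
print(storey_mae_by_range([3, 6, 12, 30], [4, 5, 15, 24]))
```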


2021 ◽  
Author(s):  
Euidam Kim ◽  
Yoonsun Chung

Abstract: Since radiation sensitivity prediction can be used in various fields, we investigate the feasibility of an in vitro radiation sensitivity prediction model using a deep neural network. Gene expression microarray data from the National Cancer Institute-60 (NCI-60) tumor cell lines and the clonogenic surviving fraction at an absorbed dose of 2 Gy (SF2) are used to predict radiation sensitivity. The prediction model is based on a convolutional neural network, and a 6-fold cross-validation approach is applied to validate the model. Of the 174 samples, 170 (97.7%) show a relative error of less than 10% and 4 (2.3%) show a relative error of more than 10%. In an additional validation, the model accurately predicts 172 out of 174 samples, representing a prediction accuracy of 98.85% under the criterion of an absolute error < 0.01 or a relative error < 10%. These results demonstrate that in vitro radiation sensitivity prediction from gene expression can be carried out with deep learning technology.
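The acceptance rule used in the additional validation (absolute error < 0.01 or relative error < 10%) can be written out directly, as in the sketch below; the SF2 values shown are hypothetical examples, not measurements from the NCI-60 panel.

```python
import numpy as np

def sf2_prediction_accuracy(predicted, measured):
    """Fraction of samples meeting the criterion:
    absolute error < 0.01 or relative error < 10%."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    abs_err = np.abs(predicted - measured)
    rel_err = abs_err / np.abs(measured)
    accepted = (abs_err < 0.01) | (rel_err < 0.10)
    return float(accepted.mean())

# Hypothetical surviving fractions at 2 Gy for four cell-line samples
print(sf2_prediction_accuracy([0.52, 0.61, 0.33, 0.80], [0.50, 0.60, 0.40, 0.79]))
```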


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0247440
Author(s):  
Adina Rahim ◽  
Ayesha Maqbool ◽  
Tauseef Rana

The purpose of this work is to provide an effective social distance monitoring solution in low-light environments in a pandemic situation. The raging coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, has brought a global crisis with its deadly spread all over the world. In the absence of an effective treatment and vaccine, the efforts to control this pandemic rely strictly on personal preventive actions, e.g., handwashing, face mask usage, environmental cleaning, and, most importantly, social distancing, which is the only expedient approach to cope with this situation. Low-light environments can become a problem for the spread of the disease because of night gatherings, especially in summer when temperatures peak; in cities where homes are congested and proper cross-ventilation is unavailable, people go outside with their families at night for fresh air. In such a situation, it is necessary to take effective measures to monitor the safety distance criteria to avoid more positive cases and to control the death toll. In this paper, a deep learning-based solution is proposed for the above-stated problem. The proposed framework utilizes the you only look once v4 (YOLO v4) model for real-time object detection, and the social distance measuring approach is introduced with a single motionless time-of-flight (ToF) camera. The risk factor is indicated based on the calculated distance, and safety distance violations are highlighted. Experimental results show that the proposed model exhibits good performance with a 97.84% mean average precision (mAP) score, and the observed mean absolute error (MAE) between actual and measured social distance values is 1.01 cm.
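The violation check described above reduces to pairwise distances between detected people in camera coordinates. The sketch below flags pairs closer than a safety threshold; the 1.8 m threshold and the example positions are assumptions for illustration, not the paper's calibration.

```python
import numpy as np
from itertools import combinations

def distance_violations(centroids_m, safe_distance_m=1.8):
    """Flag pairs of detected people closer than the safety threshold.

    centroids_m: list of (x, y, z) positions in metres, e.g. obtained by
    combining detector boxes with the ToF depth map (values are illustrative).
    """
    pts = np.asarray(centroids_m, dtype=float)
    violations = []
    for i, j in combinations(range(len(pts)), 2):
        d = float(np.linalg.norm(pts[i] - pts[j]))
        if d < safe_distance_m:
            violations.append((i, j, d))
    return violations

# Three hypothetical detections; the first two stand too close together
print(distance_violations([(0.0, 0.0, 3.0), (1.2, 0.0, 3.1), (4.0, 0.0, 5.0)]))
```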

