Pupil Size Prediction Techniques Based on Convolution Neural Network

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4965
Author(s):  
Allen Jong-Woei Whang ◽  
Yi-Yung Chen ◽  
Wei-Chieh Tseng ◽  
Chih-Hsien Tsai ◽  
Yi-Ping Chao ◽  
...  

The size of one’s pupil can indicate one’s physical condition and mental state. Most existing studies that apply AI to the pupil focus on eye tracking. This paper proposes an algorithm that can calculate pupil size based on a convolutional neural network (CNN). The pupil is usually not perfectly round, and for about 50% of pupils an ellipse is the best-fitting shape. This paper therefore uses the major and minor axes of an ellipse to represent pupil size and takes these two parameters as the output of the network. Regarding the input of the network, the dataset is in video format (continuous frames). Taking every frame from the videos to train the CNN model may cause overfitting because consecutive images are too similar. To avoid this problem, this study used data augmentation and calculated the structural similarity between frames to ensure that the training images differed to a certain degree. To optimize the network structure, this study compared the mean error while varying the depth of the network and the field of view (FOV) of the convolution filter. The results show that both deepening the network and widening the FOV of the convolution filter reduce the mean error. The mean error of the pupil length is 5.437% and that of the pupil area is 10.57%. The model runs on a low-cost mobile embedded system at 35 frames per second, demonstrating that low-cost designs can be used for pupil size prediction.
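As a minimal illustration of the frame-filtering step described above, the sketch below drops video frames that are too similar to the last retained frame, using structural similarity (SSIM) as the measure; the 0.90 threshold and the synthetic frames are illustrative assumptions, not values from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def filter_similar_frames(frames, max_ssim=0.90):
    """Keep a frame only if it is sufficiently different (low SSIM)
    from the last frame that was kept for training."""
    kept = [frames[0]]
    for frame in frames[1:]:
        score = ssim(kept[-1], frame, data_range=frame.max() - frame.min())
        if score < max_ssim:   # dissimilar enough -> keep for training
            kept.append(frame)
    return kept

# Example: 100 synthetic 64x64 grayscale frames
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)).astype(np.float32) for _ in range(100)]
training_frames = filter_similar_frames(frames)
print(f"{len(training_frames)} of {len(frames)} frames retained")
```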

Author(s):  
Xin Zhang ◽  
Yi-Yung Chen ◽  
Jong-Woei Whang ◽  
Chih-Hsien Tsai ◽  
Wei-Chieh Tseng

Drones ◽  
2020 ◽  
Vol 4 (4) ◽  
pp. 76
Author(s):  
A. Bulent Koc ◽  
Patrick T. Anderson ◽  
John P. Chastain ◽  
Christopher Post

Poultry production requires electricity for optimal climate control throughout the year. Demand for electricity in poultry production peaks during summer months, when solar irradiation is also high. Installing solar photovoltaic (PV) panels on the rooftops of poultry houses has the potential to reduce energy costs by reducing utility demand charges. The objective of this research was to estimate the rooftop areas of poultry houses for possible PV installation using aerial images acquired with a commercially available low-cost unmanned aerial vehicle (UAV). Overhead images of 31 broiler houses were captured with a UAV to assess their potential for solar energy applications. Building plan dimensions were acquired and building heights were independently measured manually. Images were captured by flying the UAV in a double-grid flight path at a 69-m altitude using an onboard 4K camera at an angle of −80° from the horizon with 70% and 80% overlaps. The captured images were processed using Agisoft PhotoScan Professional photogrammetry software. Orthophotos of the study areas were generated from the acquired image sequences using structure-from-motion (SfM) techniques. The rooftop overhang obscured the building footprint in the aerial imagery. To accurately measure building dimensions, 0.91 m was subtracted from the measured roof width and 0.61 m from the roof length, based on the blueprint dimensions of the poultry houses. The actual building widths and lengths ranged from 10.8 to 184.0 m, and the mean measurement error using the UAV-derived orthophotos was 0.69% over all planar dimensions. The average error was 1.66 ± 0.48 m for building lengths and 0.047 ± 0.13 m for building widths. Building sidewall, side-entrance, and peak heights ranged from 1.9 to 5.6 m, and the mean error was 0.06 ± 0.04 m, or 1.2%. When the same building measurements were taken from readily available satellite imagery, the mean horizontal error was −0.36%, with an average error of −0.46 ± 0.49 m for building lengths and −0.44 ± 0.14 m for building widths. The satellite orthomosaics were more accurate for length estimations and the UAV orthomosaics were more accurate for width estimations; this disparity was likely due to the flight altitude, camera field of view, and building shape. The results demonstrated that a low-cost UAV and photogrammetric SfM can be used to create digital surface models and orthomosaics of poultry houses without the need for survey-grade equipment or ground control points.
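The overhang correction and relative-error calculation described above can be summarized in a short sketch; the 0.91 m and 0.61 m corrections are taken from the abstract, while the per-house blueprint and orthophoto values below are illustrative assumptions only.

```python
# Hedged sketch: apply the roof-overhang corrections reported above and
# compute the relative error of UAV-derived dimensions against blueprints.
ROOF_WIDTH_CORRECTION_M = 0.91   # subtracted from measured roof width
ROOF_LENGTH_CORRECTION_M = 0.61  # subtracted from measured roof length

def corrected_dimensions(roof_width_m, roof_length_m):
    return (roof_width_m - ROOF_WIDTH_CORRECTION_M,
            roof_length_m - ROOF_LENGTH_CORRECTION_M)

def percent_error(measured_m, reference_m):
    return 100.0 * (measured_m - reference_m) / reference_m

# Illustrative (not real) blueprint and orthophoto values for one house
blueprint_width, blueprint_length = 12.2, 152.4   # m, from building plans
roof_width, roof_length = 13.15, 153.1            # m, from UAV orthophoto
width_m, length_m = corrected_dimensions(roof_width, roof_length)
print(f"width error:  {percent_error(width_m, blueprint_width):+.2f}%")
print(f"length error: {percent_error(length_m, blueprint_length):+.2f}%")
```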


Water ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 386 ◽  
Author(s):  
Na Zhang ◽  
Qinghe Zhang ◽  
Keh-Han Wang ◽  
Guoliang Zou ◽  
Xuelian Jiang ◽  
...  

In this paper, a new method for predicting wave overtopping discharges of Accropode-armored breakwaters using the non-hydrostatic wave model Simulating WAves till SHore (SWASH) is presented. The apparent friction coefficient concept is proposed so that the bottom shear stress term calculated in the momentum equation can reasonably represent the combined energy dissipation caused by roughness and seepage during the wave overtopping process. A large number of wave overtopping cases are simulated with a calibrated SWASH model to determine the values of the equivalent roughness coefficient, so that the apparent friction coefficients can be estimated to give good agreement between the numerical overtopping discharges and those from the EurOtop neural network model. The relative crest freeboard and the wave steepness are found to be the two main factors affecting the equivalent roughness coefficient, and an empirical formula for estimating it is derived. The overtopping discharges simulated by the SWASH model, using equivalent roughness coefficients estimated from the empirical formula, are compared with physical model test results. The mean error rate of the present model predictions is 0.24, slightly better than the mean error rate of 0.26 from the EurOtop neural network model.
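For orientation, the two governing parameters named above can be computed from standard linear wave theory; the sketch below is a minimal illustration with made-up wave conditions and does not reproduce the paper's empirical formula or its fitted coefficients.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def deep_water_wave_length(period_s):
    """Linear-theory deep-water wavelength L0 = g*T^2 / (2*pi)."""
    return G * period_s ** 2 / (2.0 * math.pi)

def wave_steepness(hm0_m, period_s):
    """Wave steepness s = Hm0 / L0."""
    return hm0_m / deep_water_wave_length(period_s)

def relative_crest_freeboard(rc_m, hm0_m):
    """Relative crest freeboard Rc / Hm0."""
    return rc_m / hm0_m

# Illustrative values only: wave height (m), peak period (s), freeboard (m)
hm0, tp, rc = 2.5, 8.0, 3.0
print(f"wave steepness      s = {wave_steepness(hm0, tp):.4f}")
print(f"relative freeboard Rc/Hm0 = {relative_crest_freeboard(rc, hm0):.2f}")
```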


2020 ◽  
Vol 21 (S1) ◽  
Author(s):  
Dina Abdelhafiz ◽  
Jinbo Bi ◽  
Reda Ammar ◽  
Clifford Yang ◽  
Sheida Nabavi

Background: Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even with advanced methods such as deep learning (DL). We developed a new model, based on the architecture of the semantic segmentation U-Net model, to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN) based model extracts contextual information by combining low-level and high-level features. We trained the proposed model using large publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and a private database from the University of Connecticut Health Center (UCHC). Results: We compared the performance of the proposed model with those of state-of-the-art DL models, including the fully convolutional network (FCN), SegNet, Dilated-Net, the original U-Net, and Faster R-CNN, as well as the conventional region growing (RG) method. The proposed Vanilla U-Net model significantly outperforms the Faster R-CNN model in terms of runtime and the Intersection over Union (IOU) metric. Trained on digitized film-based and fully digitized MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909, which show how close the output segments are to the corresponding lesions in the ground truth maps. Data augmentation was very effective in our experiments, increasing the mean DI from 0.922 to 0.951 and the mean IOU from 0.856 to 0.909. Conclusions: The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images, because the segmentation process incorporates multi-scale spatial context and captures local and global context to predict a precise pixel-wise segmentation map of an input full MG image. These maps can help radiologists differentiate benign from malignant lesions based on lesion shape. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model result in better performance in terms of mean accuracy, mean DI, and mean IOU in detecting mass lesions compared with the other DL models and the conventional method.
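The Dice index and IOU reported above are standard overlap measures for binary segmentation masks; a minimal sketch of how they are computed (on illustrative masks, not the paper's data) is shown below.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice index = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over Union = |A∩B| / |A∪B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Illustrative masks: two overlapping 30x30 squares on a 64x64 image
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt = np.zeros((64, 64), dtype=np.uint8);   gt[15:45, 15:45] = 1
print(f"Dice = {dice_coefficient(pred, gt):.3f}, IoU = {iou(pred, gt):.3f}")
```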


2021 ◽  
Vol 9 ◽  
Author(s):  
Shui-Hua Wang ◽  
Ziquan Zhu ◽  
Yu-Dong Zhang

Objective: COVID-19 is an infectious disease caused by a new strain of coronavirus. This study aims to develop a more accurate COVID-19 diagnosis system. Methods: First, the n-conv module (nCM) is introduced. Then we build a 12-layer convolutional neural network (12l-CNN) as the backbone network. PatchShuffle is then integrated into the 12l-CNN as a regularization term of the loss function. Our model is named PSCNN. Moreover, multiple-way data augmentation and Grad-CAM are employed to avoid overfitting and to locate lung lesions. Results: The mean and standard deviation values of the seven measures of our model were 95.28 ± 1.03 (sensitivity), 95.78 ± 0.87 (specificity), 95.76 ± 0.86 (precision), 95.53 ± 0.83 (accuracy), 95.52 ± 0.83 (F1 score), 91.7 ± 1.65 (MCC), and 95.52 ± 0.83 (FMI). Conclusion: Our PSCNN performs better than 10 state-of-the-art models. Further, we validate the optimal hyperparameters of our model and demonstrate the effectiveness of PatchShuffle.
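As an illustration of the PatchShuffle idea mentioned above (randomly permuting pixels inside small non-overlapping patches, applied with some probability as a regularizer), here is a minimal NumPy sketch; the patch size, application probability, and synthetic input are assumptions for demonstration, and the authors' exact integration of PatchShuffle into the loss function is not reproduced.

```python
import numpy as np

def patch_shuffle(image, patch=2, prob=0.05, rng=None):
    """Randomly permute the pixels inside each non-overlapping
    patch x patch block; with probability (1 - prob) the image
    is returned unchanged."""
    rng = rng if rng is not None else np.random.default_rng()
    if rng.random() > prob:
        return image
    h, w = image.shape[:2]
    out = image.copy()
    for y in range(0, h - h % patch, patch):
        for x in range(0, w - w % patch, patch):
            block = out[y:y + patch, x:x + patch].reshape(-1, *image.shape[2:])
            shuffled = rng.permutation(block)  # permute pixels within the block
            out[y:y + patch, x:x + patch] = shuffled.reshape(
                patch, patch, *image.shape[2:])
    return out

# Illustrative use on a synthetic 224x224 grayscale image
rng = np.random.default_rng(42)
img = rng.random((224, 224)).astype(np.float32)
augmented = patch_shuffle(img, patch=2, prob=1.0, rng=rng)
print(augmented.shape, np.allclose(augmented.sum(), img.sum()))
```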

