Research on Efficient Deep Learning Algorithm Based on ShuffleGhost in the Field of Virtual Reality

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Bangtong Huang ◽  
Hongquan Zhang ◽  
Zihong Chen ◽  
Lingling Li ◽  
Lihua Shi

Deep learning algorithms face limitations in virtual reality applications because of memory cost, computational cost, and real-time constraints. Models with strong performance may have enormous numbers of parameters and large-scale structures, which makes them hard to port to embedded devices. In this paper, inspired by GhostNet, we propose an efficient structure, ShuffleGhost, which exploits the redundancy in feature maps to reduce the cost of computation while also addressing some drawbacks of GhostNet. GhostNet suffers from the high computational cost of the convolutions in its Ghost module and shortcut, and its restriction on downsampling makes it difficult to apply the Ghost module and Ghost bottleneck to other backbones. This paper proposes three new ShuffleGhost structures to tackle these drawbacks. The ShuffleGhost module and ShuffleGhost bottlenecks employ the shuffle layer and group convolution from ShuffleNet; they are designed to redistribute the feature maps concatenated from the Ghost feature maps and primary feature maps, eliminating the gap between them and extracting features. An SENet layer is then adopted to reduce the computational cost of the group convolution and to evaluate the importance of the concatenated Ghost and primary feature maps, assigning proper weights to them. Experiments show that ShuffleGhostV3 has fewer trainable parameters and FLOPs while maintaining accuracy and, with proper design, can be more efficient on both the GPU and CPU side.
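The abstract does not include code, but the module it describes (cheap "ghost" feature maps concatenated with primary maps, redistributed by a ShuffleNet-style channel shuffle and group convolution, then reweighted by an SE layer) can be sketched roughly in PyTorch as below. All layer sizes, the ratio of primary to ghost channels, and the class name are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' ShuffleGhost implementation.
import torch
import torch.nn as nn


def channel_shuffle(x, groups):
    """Interleave channels across groups (as in ShuffleNet)."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class ShuffleGhostModule(nn.Module):
    """Hypothetical Ghost-style module: primary conv + cheap depthwise conv,
    with the concatenated feature maps redistributed by a channel shuffle and
    a grouped conv, then reweighted by an SE (squeeze-and-excitation) layer."""

    def __init__(self, in_ch, out_ch, groups=2, se_ratio=4):
        super().__init__()
        primary_ch = out_ch // 2
        ghost_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        # "Ghost" maps: a cheap depthwise conv on the primary maps.
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, ghost_ch, 3, padding=1,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(ghost_ch), nn.ReLU(inplace=True))
        self.groups = groups
        self.mix = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 1, groups=groups, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # SE layer weights the concatenated primary + ghost maps.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // se_ratio, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // se_ratio, out_ch, 1), nn.Sigmoid())

    def forward(self, x):
        p = self.primary(x)
        g = self.cheap(p)
        y = torch.cat([p, g], dim=1)         # primary + ghost feature maps
        y = channel_shuffle(y, self.groups)  # redistribute across groups
        y = self.mix(y)                      # cheap grouped 1x1 conv
        return y * self.se(y)                # SE reweighting


x = torch.randn(1, 16, 32, 32)
print(ShuffleGhostModule(16, 32)(x).shape)   # torch.Size([1, 32, 32, 32])
```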

2020 ◽  
Vol 498 (4) ◽  
pp. 5620-5628
Author(s):  
Y Su ◽  
Y Zhang ◽  
G Liang ◽  
J A ZuHone ◽  
D J Barnes ◽  
...  

ABSTRACT The origin of the diverse population of galaxy clusters remains an unexplained aspect of large-scale structure formation and cluster evolution. We present a novel method of using X-ray images to identify cool core (CC), weak cool core (WCC), and non-cool core (NCC) clusters of galaxies that are defined by their central cooling times. We employ a convolutional neural network, ResNet-18, which is commonly used for image analysis, to classify clusters. We produce mock Chandra X-ray observations for a sample of 318 massive clusters drawn from the IllustrisTNG simulations. The network is trained and tested with low-resolution mock Chandra images covering a central 1 Mpc square for the clusters in our sample. Without any spectral information, the deep learning algorithm is able to identify CC, WCC, and NCC clusters, achieving balanced accuracies (BAcc) of 92 per cent, 81 per cent, and 83 per cent, respectively. The performance is superior to classification by conventional methods using central gas densities, with an average BAcc = 81 per cent, or surface brightness concentrations, giving BAcc = 73 per cent. We use class activation mapping to localize discriminative regions for the classification decision. From this analysis, we observe that the network has utilized regions from cluster centres out to r ≈ 300 kpc and r ≈ 500 kpc to identify CC and NCC clusters, respectively. It may have recognized features in the intracluster medium that are associated with AGN feedback and disruptive major mergers.
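As a rough sketch of the classification setup described above (a ResNet-18 with a three-way CC/WCC/NCC head, scored by balanced accuracy), something like the following could be used; the image size, the placeholder data, and the training details are assumptions rather than the authors' pipeline.

```python
# Minimal sketch of a 3-class (CC / WCC / NCC) ResNet-18 classifier,
# not the authors' pipeline; the data below are placeholders.
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.metrics import balanced_accuracy_score

model = resnet18(num_classes=3)          # CC = 0, WCC = 1, NCC = 2
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch of mock X-ray images (N, 3, 128, 128) and labels.
images = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 3, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# Balanced accuracy = mean of per-class recalls, the metric quoted above.
model.eval()
with torch.no_grad():
    preds = model(images).argmax(dim=1)
print(balanced_accuracy_score(labels.numpy(), preds.numpy()))
```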


2019 ◽  
Author(s):  
Eva Malta ◽  
Charles Rodamilans ◽  
Sandra Avila ◽  
Edson Borin

This paper analyzes the cost-benefit of using EC2 instances, specifically the p2 and p3 virtual machine types, which have GPU accelerators, to execute a machine learning algorithm. The analysis includes the runtime of convolutional neural network executions and takes into consideration the time needed to stabilize the accuracy value with different batch sizes. We also measure the cost of using each machine type and define a relation between this cost and the execution time for each virtual machine. The results show that, although the price per hour of the p3 instance is three times higher, it is faster and costs almost the same as the p2 instance type to train the deep learning algorithm.
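The cost-time relation the paper measures comes down to multiplying each instance's hourly price by its training time; the toy Python calculation below illustrates it with placeholder prices and runtimes, not the paper's measured values.

```python
# Toy illustration of the cost/runtime trade-off; the prices and runtimes
# below are placeholders, not the paper's measurements.
instances = {
    # name: (price_per_hour_usd, training_hours)
    "p2.xlarge": (0.90, 9.0),
    "p3.2xlarge": (3.06, 3.0),
}

for name, (price, hours) in instances.items():
    total = price * hours
    print(f"{name}: {hours:.1f} h x ${price:.2f}/h = ${total:.2f}")
```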


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Weijie Chen ◽  
Xiaoxi Liu ◽  
Lei Qiao ◽  
Jian Wang ◽  
Yanheng Zhao

The traditional classroom has been disrupted by digital teaching resources, and students are no longer satisfied with the traditional mode of teachers teaching and students listening. Combining the characteristics of a virtual reality (VR)-interactive classroom, this paper proposes the design of a VR-interactive classroom based on a deep learning algorithm. The teaching activities of the VR-interactive classroom are divided into two parts: in-class learning activities and after-class learning activities. Software is used to design the interactive tests, and the key and difficult points of the course are taken as the development objects to realize the construction of the VR-interactive classroom. The simulation results show that the statistical output of the teaching quality evaluation can be obtained from a quantitative regression analysis of the factors involved in VR classroom participation.
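The "quantitative regression analysis" mentioned above can, in its simplest form, be read as an ordinary least-squares fit of a teaching-quality score on participation factors; the sketch below only illustrates that idea with invented features and random data, not the study's evaluation model.

```python
# Illustrative only: regress a teaching-quality score on hypothetical
# VR-classroom participation factors (random data, invented feature names).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Columns: in-class interaction count, after-class test score, time in VR (min)
X = rng.random((50, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.05, 50)

reg = LinearRegression().fit(X, y)
print(reg.coef_, reg.score(X, y))   # factor weights and R^2
```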


Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 705
Author(s):  
Po-Chou Shih ◽  
Chun-Chin Hsu ◽  
Fang-Chih Tien

Silicon wafers are the most crucial material in the semiconductor manufacturing industry. Owing to limited resources, the reclamation of monitor and dummy wafers for reuse can dramatically lower costs and become a competitive edge in this industry. However, defects such as voids, scratches, particles, and contamination are found on the surfaces of reclaimed wafers. Most of the reclaimed wafers with an asymmetric distribution of defects, known as the "good (G)" reclaimed wafers, can be re-polished if their defects are not irreversible and their thicknesses are sufficient for re-polishing. Currently, the "no good (NG)" reclaimed wafers must first be screened by experienced human inspectors to determine their reusability through defect mapping. This screening task is tedious, time-consuming, and unreliable. This study presents a deep-learning-based defect classification approach for reclaimed wafers. Three neural networks, a multilayer perceptron (MLP), a convolutional neural network (CNN), and a Residual Network (ResNet), are adopted and compared for classification. These networks analyze the pattern of the defect map and determine not only whether a reclaimed wafer is suitable for re-polishing but also to which defect category it belongs. The open-source TensorFlow library was used to train the MLP, CNN, and ResNet networks using collected wafer images as input data. Based on the experimental results, we found that the system applying CNN networks with a proper design of kernels and structures gave fast and superior performance in identifying defective wafers owing to its deep learning capability, the ResNet exhibited excellent accuracy on average, and the large-scale MLP networks also achieved good results with proper network structures.
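Since the study trained its networks with TensorFlow, a compact Keras CNN of the kind compared in the paper might look like the following sketch; the input size, number of defect classes, and layer widths are assumptions made for illustration.

```python
# Minimal Keras CNN sketch for wafer defect-map classification;
# the input size, class count, and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # e.g. re-polishable plus several defect categories (assumed)

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),            # grayscale defect map
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```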


Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2557
Author(s):  
Ben Zierdt ◽  
Taichu Shi ◽  
Thomas DeGroat ◽  
Sam Furman ◽  
Nicholas Papas ◽  
...  

Ultraviolet disinfection has been proven to be effective for surface sanitation. Traditional ultraviolet disinfection systems generate omnidirectional radiation, which introduces safety concerns regarding human exposure. Large-scale disinfection must therefore be performed without humans present, which limits the time efficiency of disinfection. We propose and experimentally demonstrate a targeted ultraviolet disinfection system using a combination of robotics, lasers, and deep learning. The system uses a laser-galvo and a camera mounted on a two-axis gimbal running a custom deep learning algorithm. This allows ultraviolet radiation to be applied to any surface in the room where the system is mounted, and the algorithm ensures that the laser targets the desired surfaces and avoids others, such as humans. Both the laser-galvo and the deep learning algorithm were tested for targeted disinfection.
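The targeting logic described (detect surfaces to irradiate while excluding people) could be sketched as a detection step followed by a filter on candidate target points; the pretrained detector, the COCO person class, and the galvo-steering stub mentioned in the comments are assumptions, not the authors' system.

```python
# Sketch only: detect people with a pretrained detector and skip any
# disinfection target that falls inside a person's bounding box.
# The detector choice and the galvo-steering step are assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON_CLASS = 1  # COCO label id for "person"


def safe_targets(image, candidate_points, score_thresh=0.8):
    """Return only the candidate (x, y) points not inside a detected person."""
    with torch.no_grad():
        det = detector([image])[0]
    boxes = det["boxes"][(det["labels"] == PERSON_CLASS)
                         & (det["scores"] > score_thresh)]
    safe = []
    for x, y in candidate_points:
        inside = any(x1 <= x <= x2 and y1 <= y <= y2
                     for x1, y1, x2, y2 in boxes.tolist())
        if not inside:
            safe.append((x, y))   # a hypothetical steer_galvo(x, y) would go here
    return safe


image = torch.rand(3, 480, 640)           # placeholder camera frame in [0, 1]
print(safe_targets(image, [(100, 200), (320, 240)]))
```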


2020 ◽  
Vol 13 (1) ◽  
pp. 9
Author(s):  
Herminarto Nugroho ◽  
Meredita Susanty ◽  
Ade Irawan ◽  
Muhamad Koyimatu ◽  
Ariana Yunita

This paper proposes a fully convolutional variational autoencoder (VAE) for feature extraction from a large-scale dataset of fire images. The dataset will be used to train a deep learning algorithm to detect fire and smoke. Feature extraction is used to tackle the curse of dimensionality, a common issue when training deep learning models on huge datasets: it aims to reduce the dimension of the dataset significantly without losing too much essential information. Variational autoencoders (VAEs) are powerful generative models that can be used for dimension reduction. VAEs work better than other available methods for this purpose because they can explore variations in the data in a specific direction.
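The paper does not specify the architecture, so the fully convolutional VAE below is only a minimal sketch of the idea: a convolutional encoder, convolutional mean/log-variance heads, reparameterization, and a transposed-convolution decoder, trained with a reconstruction term plus a KL divergence. The image size, channel widths, and latent size are assumptions.

```python
# Minimal convolutional VAE sketch (not the paper's architecture);
# image size, channel widths, and latent size are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(                 # 3x64x64 -> 64x16x16
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.to_mu = nn.Conv2d(64, latent_dim, 1)      # fully convolutional heads
        self.to_logvar = nn.Conv2d(64, latent_dim, 1)
        self.dec = nn.Sequential(                 # latent x 16x16 -> 3x64x64
            nn.ConvTranspose2d(latent_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction term + KL divergence to the unit Gaussian prior.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld


x = torch.rand(4, 3, 64, 64)            # placeholder batch of fire images
recon, mu, logvar = ConvVAE()(x)
print(vae_loss(recon, x, mu, logvar).item())
```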


Author(s):  
Dang Viet Hung ◽  
Ha Manh Hung ◽  
Pham Hoang Anh ◽  
Nguyen Truong Thang

Timely monitoring of large-scale civil structures is a tedious task demanding expert experience and significant economic resources. Towards a smart monitoring system, this study proposes a hybrid deep learning algorithm for structural damage detection that not only reduces the required resources, including computational complexity and data storage, but also can deal with different damage levels. The technique combines the ability of a Convolutional Neural Network to capture local connectivity with the well-known capacity of a Long Short-Term Memory network to account for long-term dependencies, in a single end-to-end architecture that works directly on raw acceleration time series without any signal preprocessing step. The proposed approach is applied to a series of experimentally measured vibration data from a three-story frame and provides accurate damage identification results. Furthermore, parametric studies demonstrate the robustness of this hybrid deep learning method when facing data corrupted by random noise, which is unavoidable in practice. Keywords: structural damage detection; deep learning algorithm; vibration; sensor; signal processing.
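A minimal sketch of the hybrid architecture described (1-D convolutions over raw acceleration series to capture local connectivity, followed by an LSTM for long-term dependencies and a classification head) might look like the following; the channel counts, sequence length, sensor count, and number of damage levels are illustrative assumptions.

```python
# Illustrative CNN + LSTM hybrid for raw acceleration time series;
# layer sizes, sequence length, and class count are assumptions.
import torch
import torch.nn as nn


class CNNLSTMDamageNet(nn.Module):
    def __init__(self, n_sensors=4, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(        # local patterns in the raw signal
            nn.Conv1d(n_sensors, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2))
        self.lstm = nn.LSTM(64, 64, batch_first=True)  # long-term dependencies
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                # x: (batch, sensors, time)
        h = self.cnn(x)                  # (batch, 64, time / 4)
        h = h.transpose(1, 2)            # (batch, time / 4, 64) for the LSTM
        _, (hn, _) = self.lstm(h)
        return self.head(hn[-1])         # damage-level logits


x = torch.randn(8, 4, 1024)              # 8 samples, 4 sensors, 1024 time steps
print(CNNLSTMDamageNet()(x).shape)       # torch.Size([8, 4])
```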


2021 ◽  
Vol 12 ◽  
Author(s):  
Suk-Young Kim ◽  
Taesung Park ◽  
Kwonyoung Kim ◽  
Jihoon Oh ◽  
Yoonjae Park ◽  
...  

Purpose: The number of patients with alcohol-related problems is steadily increasing, and large-scale surveys of alcohol-related problems have been conducted. However, studies that predict hazardous drinkers and identify which factors contribute to the prediction are limited. The purpose of this study was therefore to predict hazardous drinkers and the severity of patients' alcohol-related problems using a deep learning algorithm based on large-scale survey data.

Materials and Methods: Datasets from the National Health and Nutrition Examination Survey of South Korea (K-NHANES), a nationally representative survey of the entire South Korean population, were used to train deep learning and conventional machine learning algorithms. Datasets from 69,187 and 45,672 participants were used to predict hazardous drinkers and the severity of alcohol-related problems, respectively. Based on the degree of contribution of each variable to the deep learning model, it was possible to determine which variables contributed significantly to the prediction of hazardous drinkers.

Results: Deep learning showed higher performance than the conventional machine learning algorithms. It predicted hazardous drinkers with an AUC (area under the receiver operating characteristic curve) of 0.870 (logistic regression: 0.858, linear SVM: 0.849, random forest classifier: 0.810, k-nearest neighbors: 0.740). Among the 325 variables for predicting hazardous drinkers, energy intake was the factor contributing most to the prediction, followed by carbohydrate intake. Participants were classified into Zone I, Zone II, Zone III, and Zone IV based on the degree of alcohol-related problems, with AUCs of 0.881, 0.774, 0.853, and 0.879, respectively.

Conclusion: Hazardous drinking groups could be effectively predicted, and individuals could be classified according to the degree of their alcohol-related problems, using a deep learning algorithm. The algorithm could be used to screen people who need treatment for alcohol-related problems among the general population or hospital visitors.
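As a sketch of the model comparison reported above (a neural network against logistic regression, linear SVM, random forest, and k-nearest neighbors, all scored by AUC), the following could be run on synthetic data; it is not the study's K-NHANES pipeline, and the hyperparameters are placeholders.

```python
# Sketch of the AUC comparison on synthetic data -- not the K-NHANES pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=5000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Neural network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Linear SVM": LinearSVC(),
    "Random forest": RandomForestClassifier(),
    "K-nearest neighbors": KNeighborsClassifier(),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    # Use probabilities when available, otherwise the decision function.
    score = (m.predict_proba(X_te)[:, 1] if hasattr(m, "predict_proba")
             else m.decision_function(X_te))
    print(f"{name}: AUC = {roc_auc_score(y_te, score):.3f}")
```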


Author(s):  
Pinki and Prof. Sachin Garg

In the present scenario, due to COVID-19, efficient face mask detection applications are lacking yet in high demand for transportation, densely populated areas, residential districts, large-scale manufacturers, and other enterprises that must ensure safety. This system can therefore be used in real-time applications that require face mask detection for safety purposes during the COVID-19 outbreak. The project can be integrated with embedded systems for application in airports, railway stations, offices, schools, and public places to ensure that public safety guidelines are followed. The goal is to identify whether a person in an image or video stream is wearing a face mask. If a person does not wear a mask, a notification is sent to the respective administrator, using Python and a deep learning algorithm built with a Convolutional Neural Network, the Keras framework, and OpenCV.
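A minimal version of the described pipeline (an OpenCV face detector feeding face crops to a Keras CNN that classifies mask vs. no mask) might look like the sketch below; the trained model file, the input image, and the notification step are placeholders, not this project's code.

```python
# Sketch of the mask / no-mask pipeline; the trained model file, the input
# image, and the notification step are placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("mask_detector.h5")   # hypothetical trained mask/no-mask CNN

frame = cv2.imread("input.jpg")          # placeholder image / video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 4):
    face = cv2.resize(frame[y:y + h, x:x + w], (128, 128)) / 255.0
    mask_prob = model.predict(np.expand_dims(face, axis=0))[0][0]
    if mask_prob < 0.5:                  # assumed convention: 1 = mask worn
        print("No mask detected -- notify the administrator here")
```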

