Automatic Assessment of Buildings Location Fitness for Solar Panels Installation Using Drones and Neural Network

CivilEng ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 1052-1064
Author(s):  
Ammar Alzarrad ◽  
Chance Emanuels ◽  
Mohammad Imtiaz ◽  
Haseeb Akbar

Solar panel location assessment is usually a time-consuming manual process, and many criteria must be considered before a decision is made. One of the most significant criteria is the building's location and surrounding environment. This research project proposes a model that automatically identifies potential roof space for solar panels using drones and convolutional neural networks (CNNs). CNNs are used to identify building roofs in drone imagery, and transfer learning is used to classify the roofs into two categories, shaded and unshaded. The CNN is trained and tested on separate imagery databases to improve classification accuracy. The results demonstrate successful segmentation of buildings and identification of shaded roofs. The model presented in this paper can be used to prioritize buildings by the likelihood that they would benefit from switching to solar energy. To illustrate an implementation of the model, it was applied to a selected neighborhood in the city of Hurricane, West Virginia. The results show that the proposed model can help investors in the energy and building sectors make better and more informed decisions.
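The paper does not include implementation details; the following is a minimal sketch of the transfer-learning step it describes, labeling roof crops as shaded or unshaded. The ResNet-18 backbone, folder layout, class names, and hyperparameters are assumptions (the recent torchvision weights API is used), not details from the paper.

```python
# Hypothetical sketch: transfer learning to label roof crops as shaded/unshaded.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed layout: roof_crops/{shaded,unshaded}/*.jpg
train_set = datasets.ImageFolder("roof_crops", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained backbone
for p in model.parameters():
    p.requires_grad = False                        # freeze convolutional features
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: shaded vs. unshaded

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```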

Author(s):  
Ankita Singh ◽  
Pawan Singh

Image classification is a central topic in artificial vision systems and has drawn considerable interest in recent years. The goal is to assign an input image to a class based on its visual content. Traditionally, images were described by hand-crafted features, and learnable classifiers such as random forests and decision trees were applied to the extracted features to reach a final decision. This approach breaks down when large numbers of images are involved, because suitable features become too difficult to engineer; this is one of the reasons the deep neural network model was introduced. Deep learning makes it feasible to represent the hierarchical nature of features through multiple layers and their associated weights. Existing image classification methods have gradually been applied to real-world problems, but their application still suffers from issues such as unsatisfactory results, low classification accuracy, and weak adaptive ability. Deep learning models have strong learning ability and combine feature extraction and classification into a single whole that completes the image classification task, which can improve classification accuracy effectively. Convolutional Neural Networks are a powerful deep neural network technique. These networks preserve the spatial structure of a problem and were built for object recognition tasks such as classifying an image into its respective class. They are well known because they achieve state-of-the-art results on complex computer vision and natural language processing tasks, and they have been used extensively.
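A minimal PyTorch sketch of the point made above, that a CNN combines learned feature extraction and classification in one model. The layer sizes, 3x32x32 input, and 10-class output are illustrative assumptions.

```python
# Minimal CNN classifier sketch: convolutional layers learn the features and a
# final linear layer performs the classification in a single model.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 x 16 x 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 x 8 x 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)                      # learned feature extraction
        return self.classifier(x.flatten(1))      # classification head

logits = SmallCNN()(torch.randn(4, 3, 32, 32))    # 4 images -> 4 x 10 class scores
print(logits.shape)
```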


Author(s):  
Jingyun Xu ◽  
Yi Cai

Some text classification methods do not work well on short texts due to data sparsity, and they do not fully exploit context-relevant knowledge. To tackle these problems, we propose a neural network that incorporates context-relevant knowledge into a convolutional neural network for short text classification. Our model consists of two modules. The first module uses two layers to extract concept and context features respectively, and then employs an attention layer to select the context-relevant concepts. The second module uses a convolutional neural network to extract high-level features from the word features and the context-relevant concept features. Experimental results on three datasets show that the proposed model outperforms state-of-the-art models.
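The exact layer configuration is not given in the abstract; the hedged sketch below illustrates the two-module idea only: dot-product attention selects context-relevant concept vectors, then a 1-D convolution with max-over-time pooling runs over the concatenated word and concept features. All dimensions and the attention form are assumptions.

```python
# Hedged sketch of the two-module structure (not the authors' exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptAttentionCNN(nn.Module):
    def __init__(self, emb_dim=100, n_filters=64, num_classes=3):
        super().__init__()
        self.conv = nn.Conv1d(2 * emb_dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, num_classes)

    def forward(self, words, concepts):
        # words:    (batch, seq_len, emb_dim)     word/context features
        # concepts: (batch, n_concepts, emb_dim)  candidate concept features
        context = words.mean(dim=1, keepdim=True)                 # (B, 1, D)
        scores = torch.bmm(concepts, context.transpose(1, 2))     # (B, C, 1)
        alpha = F.softmax(scores, dim=1)                          # attention weights
        relevant = (alpha * concepts).sum(dim=1, keepdim=True)    # (B, 1, D)
        relevant = relevant.expand(-1, words.size(1), -1)         # broadcast over sequence
        x = torch.cat([words, relevant], dim=-1).transpose(1, 2)  # (B, 2D, L)
        x = F.relu(self.conv(x)).max(dim=2).values                # max-over-time pooling
        return self.fc(x)

logits = ConceptAttentionCNN()(torch.randn(2, 20, 100), torch.randn(2, 5, 100))
print(logits.shape)  # torch.Size([2, 3])
```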


Author(s):  
Mohannad Elhamod ◽  
Kelly M. Diamond ◽  
A. Murat Maga ◽  
Yasin Bakis ◽  
Henry L. Bart ◽  
...  

Abstract: Fish species classification is an important task that is the foundation of many industrial, commercial, ecological, and scientific applications involving the study of fish distributions, dynamics, and evolution. While conventional approaches for this task use off-the-shelf machine learning (ML) methods such as existing Convolutional Neural Network (ConvNet) architectures, there is an opportunity to inform the ConvNet architecture using our knowledge of biological hierarchies among taxonomic classes. In this work, we propose infusing phylogenetic information into the model's training to guide its structure and the relationships among the extracted features. In our extensive experimental analyses, the proposed model, named Hierarchy-Guided Neural Network (HGNN), outperforms conventional ConvNet models in terms of classification accuracy under scarce training data conditions. We also observe that HGNN shows better resilience to adversarial occlusions, in which some of the most informative patch regions of the image are intentionally blocked and their effect on classification accuracy is studied.
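The abstract does not specify how the phylogeny is infused. One common way to realize hierarchy-guided training, sketched below purely as an assumption and not as the authors' HGNN, is a shared backbone with a coarse taxonomic head (e.g., genus) and a species head whose losses are combined so the coarser level guides the learned features. All sizes and the loss weighting are assumptions.

```python
# Hedged illustration of hierarchy-guided training with two classification heads.
import torch
import torch.nn as nn
from torchvision import models

class HierarchyGuidedNet(nn.Module):
    def __init__(self, n_genera=20, n_species=100):
        super().__init__()
        backbone = models.resnet18()
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        self.genus_head = nn.Linear(512, n_genera)
        self.species_head = nn.Linear(512, n_species)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.genus_head(f), self.species_head(f)

model = HierarchyGuidedNet()
criterion = nn.CrossEntropyLoss()
images = torch.randn(4, 3, 224, 224)
genus_labels = torch.randint(0, 20, (4,))
species_labels = torch.randint(0, 100, (4,))

genus_logits, species_logits = model(images)
# Coarse-level loss acts as a guide for the fine-grained species task.
loss = criterion(species_logits, species_labels) + 0.5 * criterion(genus_logits, genus_labels)
loss.backward()
```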


2020 ◽  
Vol 224 (1) ◽  
pp. 191-198
Author(s):  
Xinliang Liu ◽  
Tao Ren ◽  
Hongfeng Chen ◽  
Yufeng Chen

SUMMARY In this paper, convolutional neural networks (CNNs) were used to distinguish between tectonic and non-tectonic seismicity. The proposed CNNs consisted of seven convolutional layers with small kernels and one fully connected layer, and relied only on the acoustic waveform without manually extracted features. For a single station, the accuracy of the model was 0.90, and the event accuracy reached 0.93. The proposed model was tested on data recorded in China from January 2019 to August 2019, where the event accuracy reached 0.92, showing that the proposed model can distinguish between tectonic and non-tectonic seismicity.
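A sketch of the stated architecture, seven 1-D convolutional layers with small kernels followed by a single fully connected layer that outputs tectonic vs. non-tectonic. The channel counts, kernel size, and 3-component 3000-sample input length are assumptions, not values from the paper.

```python
# Hedged 1-D CNN sketch for waveform classification.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
                         nn.ReLU(), nn.MaxPool1d(2))

channels = [3, 16, 32, 32, 64, 64, 128, 128]            # seven convolutional layers
layers = [conv_block(channels[i], channels[i + 1]) for i in range(7)]
model = nn.Sequential(*layers,
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                      nn.Linear(128, 2))                 # single fully connected layer

waveform = torch.randn(8, 3, 3000)                       # batch of 3-component records
print(model(waveform).shape)                             # torch.Size([8, 2])
```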


2020 ◽  
Vol 36 (5) ◽  
pp. 743-749
Author(s):  
Xingwang Li ◽  
Xiaofei Fan ◽  
Lili Zhao ◽  
Sheng Huang ◽  
Yi He ◽  
...  

Highlights:
• This study demonstrated the feasibility of classifying pepper seed varieties using multispectral imaging combined with a one-dimensional convolutional neural network (1D-CNN).
• Convolutional neural networks were adopted to develop models for prediction of seed varieties, and their performance was compared with KNN and SVM.
• In this experiment, the SVM classification model performed best, but the 1D-CNN classification model is relatively easy to implement.
Abstract. When non-seed materials are mixed into seed lots, or low-value seed varieties are mixed into high-value varieties, growers and businesses suffer losses. Successful discrimination of seed varieties is therefore critical for improving seed value. In recent years, convolutional neural networks (CNNs) have been used for classification of seed varieties. This study examined the feasibility of using multispectral imaging combined with a one-dimensional convolutional neural network (1D-CNN) to classify pepper seed varieties. The total number of samples across the three varieties was 1472, and the average spectral curve between 365 nm and 970 nm of the three varieties was studied. The data were analyzed using either the full spectral bands or the feature bands selected by the successive projections algorithm (SPA). SPA extracted 9 feature bands from the 19 bands (430, 450, 470, 490, 515, 570, 660, 780, and 880 nm). The classification accuracies of the three full-band models developed with K-nearest neighbors (KNN), support vector machine (SVM), and 1D-CNN were 85.81%, 97.70%, and 90.50%, respectively. With the full bands, SVM and 1D-CNN performed significantly better than KNN, and SVM performed slightly better than 1D-CNN. With the feature bands, the testing accuracies of SVM and 1D-CNN were 97.30% and 92.6%, respectively. Although the classification accuracy of 1D-CNN was not the highest, its ease of operation made it the most feasible method for pepper seed variety prediction. Keywords: Multispectral imaging, One-dimensional convolutional neural network, Pepper seed, Variety classification.
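A hedged sketch of a 1D-CNN over the multispectral reflectance curve; only the 19-band input (9 bands if SPA-selected features are used) and the three-variety output come from the abstract, while the layer sizes are assumptions.

```python
# Hedged 1D-CNN sketch for per-seed spectral curves.
import torch
import torch.nn as nn

def make_spectral_cnn(n_bands=19, n_varieties=3):
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(32, n_varieties),
    )

spectra = torch.randn(5, 1, 19)            # five seeds, one channel, 19 bands
print(make_spectral_cnn()(spectra).shape)  # torch.Size([5, 3])
```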


2020 ◽  
Author(s):  
Yakoop Razzaz Hamoud Qasim ◽  
Habeb Abdulkhaleq Mohammed Hassan ◽  
Abdulelah Abdulkhaleq Mohammed Hassan

In this paper we present a convolutional neural network consisting of NASNet and MobileNet in parallel (concatenation) to classify three classes, COVID-19, normal, and pneumonia, based on a dataset of 1083 X-ray images divided into 361 images per class. VGG16 and ResNet152-v2 models were also prepared and trained on the same dataset to compare their performance with that of the proposed model. After training the networks and evaluating their performance, we obtained an overall accuracy of 96.91% for the proposed model, 92.59% for the VGG16 model, and 94.14% for ResNet152-v2. For the COVID-19 class, the proposed model achieved accuracy, sensitivity, specificity, and precision of 99.69%, 99.07%, 100%, and 100%, respectively. These results were better than those of the other models. In conclusion, neural networks built from models in parallel are most effective when the available training data are small and the features of different classes are similar.
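A hedged sketch of the parallel (concatenation) structure: two ImageNet backbones extract features side by side and a shared head classifies COVID-19 / normal / pneumonia. torchvision does not ship NASNet, so MobileNetV2 and MNASNet stand in here only to illustrate concatenating two backbones' features; they are not the authors' models.

```python
# Hedged sketch of two backbones in parallel with a concatenated feature head.
import torch
import torch.nn as nn
from torchvision import models

class ParallelBackbones(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.branch_a = models.mobilenet_v2().features    # 1280-dim features
        self.branch_b = models.mnasnet1_0().layers         # 1280-dim features
        self.head = nn.Linear(1280 + 1280, num_classes)

    def forward(self, x):
        fa = self.branch_a(x).mean(dim=[2, 3])              # global average pooling
        fb = self.branch_b(x).mean(dim=[2, 3])
        return self.head(torch.cat([fa, fb], dim=1))        # concatenation

xray = torch.randn(2, 3, 224, 224)
print(ParallelBackbones()(xray).shape)                      # torch.Size([2, 3])
```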


Author(s):  
Muhammad Hanif Ahmad Nizar ◽  
Chow Khuen Chan ◽  
Azira Khalil ◽  
Ahmad Khairuddin Mohamed Yusof ◽  
Khin Wee Lai

Background: Valvular heart disease is a serious disease that leads to mortality and increases medical care costs. The aortic valve is the valve most commonly affected by this disease. Doctors rely on echocardiograms for diagnosing and evaluating valvular heart disease. However, echocardiographic images are poor compared to Computerized Tomography and Magnetic Resonance Imaging scans. This study proposes the development of Convolutional Neural Networks (CNN) that can function optimally during a live echocardiographic examination for detection of the aortic valve. An automated detection system in an echocardiogram will improve the accuracy of medical diagnosis and can provide further medical analysis from the resulting detections. Methods: Two detection architectures, Single Shot Multibox Detector (SSD) and Faster Region-based Convolutional Neural Network (Faster R-CNN), with various feature extractors were trained on echocardiography images from 33 patients. Thereafter, the models were tested on 10 echocardiography videos. Results: Faster R-CNN Inception v2 showed the highest accuracy (98.6%), followed closely by SSD MobileNet v2. In terms of speed, SSD MobileNet v2 incurred a loss of 46.81% in frames per second (fps) during real-time detection but still performed better than the other neural network models. Additionally, SSD MobileNet v2 used the least Graphics Processing Unit (GPU) resources, while Central Processing Unit (CPU) usage was relatively similar across all models. Conclusion: Our findings provide a foundation for implementing a convolutional detection system in echocardiography for medical purposes.
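The study's models (Faster R-CNN Inception v2, SSD MobileNet v2) come from the TensorFlow Object Detection API; as an analogous, hedged setup only, the sketch below configures torchvision's Faster R-CNN for a single "aortic valve" class (background plus one class). It is not the authors' code.

```python
# Hedged sketch: a single-class detector head on torchvision's Faster R-CNN.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn()                          # optionally load COCO weights
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)  # bg + valve

model.eval()
frames = [torch.rand(3, 480, 640)]                          # one echocardiogram frame
with torch.no_grad():
    detections = model(frames)                              # boxes, labels, scores
print(detections[0]["boxes"].shape)
```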


Author(s):  
Sebastian Nowak ◽  
Narine Mesropyan ◽  
Anton Faron ◽  
Wolfgang Block ◽  
Martin Reuter ◽  
...  

Abstract. Objectives: To investigate the diagnostic performance of deep transfer learning (DTL) to detect liver cirrhosis from clinical MRI. Methods: The dataset for this retrospective analysis consisted of 713 (343 female) patients who underwent liver MRI between 2017 and 2019. In total, 553 of these subjects had a confirmed diagnosis of liver cirrhosis, while the remainder had no history of liver disease. T2-weighted MRI slices at the level of the caudate lobe were manually exported for DTL analysis. Data were randomly split into training, validation, and test sets (70%/15%/15%). A ResNet50 convolutional neural network (CNN) pre-trained on the ImageNet archive was used for cirrhosis detection with and without upstream liver segmentation. Classification performance for detection of liver cirrhosis was compared to two radiologists with different levels of experience (4th-year resident, board-certified radiologist). Segmentation was performed using a U-Net architecture built on a pre-trained ResNet34 encoder. Differences in classification accuracy were assessed by the χ2-test. Results: Dice coefficients for automatic segmentation were above 0.98 for both validation and test data. The classification accuracy of liver cirrhosis on validation (vACC) and test (tACC) data for the DTL pipeline with upstream liver segmentation (vACC = 0.99, tACC = 0.96) was significantly higher compared to the resident (vACC = 0.88, p < 0.01; tACC = 0.91, p = 0.01) and to the board-certified radiologist (vACC = 0.96, p < 0.01; tACC = 0.90, p < 0.01). Conclusion: This proof-of-principle study demonstrates the potential of DTL for detecting cirrhosis based on standard T2-weighted MRI. The presented method for image-based diagnosis of liver cirrhosis demonstrated expert-level classification accuracy. Key Points:
• A pipeline consisting of two convolutional neural networks (CNNs) pre-trained on an extensive natural image database (ImageNet archive) enables detection of liver cirrhosis on standard T2-weighted MRI.
• High classification accuracy can be achieved even without altering the pre-trained parameters of the convolutional neural networks.
• Other abdominal structures apart from the liver were relevant for detection when the network was trained on unsegmented images.
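A hedged sketch of the two-stage pipeline: a U-Net with a pre-trained ResNet34 encoder masks the liver, and a frozen ImageNet-pre-trained ResNet50 with a new two-class head labels the slice as cirrhosis or no cirrhosis. The use of segmentation_models_pytorch and the recent torchvision weights API are tooling assumptions, not the authors' code.

```python
# Hedged sketch of segmentation followed by frozen-backbone transfer learning.
import torch
import torch.nn as nn
from torchvision import models
import segmentation_models_pytorch as smp

# Stage 1: liver segmentation (U-Net with a pre-trained ResNet34 encoder).
segmenter = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                     in_channels=3, classes=1)

# Stage 2: cirrhosis classification with frozen pre-trained features.
classifier = models.resnet50(weights="IMAGENET1K_V1")
for p in classifier.parameters():
    p.requires_grad = False                               # keep pre-trained parameters fixed
classifier.fc = nn.Linear(classifier.fc.in_features, 2)   # cirrhosis vs. no cirrhosis

segmenter.eval()
classifier.eval()
slice_batch = torch.randn(1, 3, 256, 256)                 # T2-weighted slice replicated to 3 channels
with torch.no_grad():
    mask = torch.sigmoid(segmenter(slice_batch))          # liver probability mask
    logits = classifier(slice_batch * (mask > 0.5))       # classify the masked slice
print(logits.shape)                                       # torch.Size([1, 2])
```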


2020 ◽  
Vol 8 (4) ◽  
pp. 469
Author(s):  
I Gusti Ngurah Alit Indrawan ◽  
I Made Widiartha

Artificial Neural Networks (ANNs) are a branch of artificial intelligence that is often used to solve problems involving grouping and pattern recognition. This research aims to classify the Letter Recognition dataset using an Artificial Neural Network whose weights are optimized with the Artificial Bee Colony algorithm. The best classification accuracy obtained in this study was 92.85%, using a combination of 4 hidden layers with 10 neurons in each hidden layer.
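The abstract reports the topology (4 hidden layers of 10 neurons each) but not the implementation. The sketch below shows only that topology, laid out for the UCI Letter Recognition data (16 input features, 26 letter classes); the Artificial Bee Colony weight optimization itself is omitted and would replace gradient-based updates.

```python
# Structural sketch of the reported 4 x 10 hidden-layer network.
import torch
import torch.nn as nn

layers = []
in_dim = 16                                   # 16 numeric attributes per sample
for _ in range(4):                            # four hidden layers
    layers += [nn.Linear(in_dim, 10), nn.Sigmoid()]
    in_dim = 10
layers.append(nn.Linear(in_dim, 26))          # 26 letter classes
mlp = nn.Sequential(*layers)

print(mlp(torch.randn(3, 16)).shape)          # torch.Size([3, 26])
```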


2021 ◽  
Vol 65 (1) ◽  
pp. 11-22
Author(s):  
Mengyao Lu ◽  
Shuwen Jiang ◽  
Cong Wang ◽  
Dong Chen ◽  
Tian’en Chen

Highlights:
• A classification model for the front and back sides of tobacco leaves was developed for application in industry.
• A tobacco leaf grading method that combines a CNN with double-branch integration was proposed.
• The A-ResNet network was proposed and compared with other classic CNN networks.
• The grading accuracy across eight grades was 91.30% and the testing time was 82.180 ms, showing relatively high classification accuracy and efficiency.
Abstract. Flue-cured tobacco leaf grading is a key step in the production and processing of Chinese-style cigarette raw materials, directly affecting cigarette blending and quality stability. At present, manual grading of tobacco leaves is dominant in China, resulting in unsatisfactory grading quality and consuming considerable material and financial resources. In this study, for fast, accurate, and non-destructive tobacco leaf grading, 2,791 flue-cured tobacco leaves of eight different grades from southern Anhui Province, China, were chosen as the study sample, and a tobacco leaf grading method that combines convolutional neural networks and double-branch integration was proposed. First, a classification model for the front and back sides of tobacco leaves was trained by transfer learning. Second, two processing methods (equal-scaled resizing and cropping) were used to obtain global images and local patches from the front sides of tobacco leaves. A global image-based tobacco leaf grading model was then developed using the proposed A-ResNet-65 network, and a local patch-based tobacco leaf grading model was developed using the ResNet-34 network. These two networks were compared with classic deep learning networks such as VGGNet, GoogLeNet-V3, and ResNet. Finally, the grading results of the two grading models were integrated to realize tobacco leaf grading. The tobacco leaf classification accuracy of the final model, for eight different grades, was 91.30%, and grading of a single tobacco leaf required 82.180 ms. The proposed method achieved relatively high grading accuracy and efficiency. It provides a method for industrial implementation of tobacco leaf grading and offers a new approach for the quality grading of other agricultural products. Keywords: Convolutional neural network, Deep learning, Image classification, Transfer learning, Tobacco leaf grading
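A hedged sketch of the double-branch integration step: one network grades the equal-scaled global leaf image, another grades a cropped local patch, and their class probabilities are averaged. ResNet-50 stands in for the authors' A-ResNet-65, which is not publicly described in detail; only the ResNet-34 local branch and the eight-grade output come from the abstract.

```python
# Hedged sketch of two-branch grading with probability averaging.
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 8

global_branch = models.resnet50()                        # stand-in for A-ResNet-65
global_branch.fc = nn.Linear(global_branch.fc.in_features, NUM_GRADES)

local_branch = models.resnet34()                         # local patch branch
local_branch.fc = nn.Linear(local_branch.fc.in_features, NUM_GRADES)

def grade_leaf(global_img, local_patch):
    """Integrate the two branches by averaging their softmax outputs."""
    p_global = torch.softmax(global_branch(global_img), dim=1)
    p_local = torch.softmax(local_branch(local_patch), dim=1)
    return (p_global + p_local) / 2

probs = grade_leaf(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print(probs.argmax(dim=1))                               # predicted grade index
```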

