Classification of Microchannel Flame Regimes Based on Convolutional Neural Networks

2021 ◽  
Author(s):  
Seyed Navid Roohani Isfahani ◽  
Vinicius M. Sauer ◽  
Ingmar Schoegl

Abstract Micro-combustion has shown significant potential for studying and characterizing the combustion behavior of hydrocarbon fuels. Among several experimental approaches based on this method, the most prominent one employs an externally heated micro-channel. Three distinct combustion regimes are reported for this device, namely weak flames, flames with repetitive extinction and ignition (FREI), and normal flames, which form at low, moderate, and high flow rates, respectively. Within each flame regime, noticeable differences exist in both shape and luminosity, and the transition points can be used to gain insight into fuel characteristics. In this study, flame images are obtained using a monochrome camera equipped with a 430 nm bandpass filter to capture the chemiluminescence signal emitted by the flame. Sequences of conventional flame photographs are taken during the experiment and computationally merged to generate high dynamic range (HDR) images. In a highly diluted fuel/oxidizer mixture, it is observed that FREI disappear and are replaced by a gradual, direct transition between weak and normal flames, which makes it hard to identify the different combustion regimes. To resolve the issue, a convolutional neural network (CNN) is introduced to classify the flame regime. The accuracy of the model is 99.34%, 99.66%, and 99.83% on the training, validation, and testing data sets, respectively. This level of accuracy is achieved by conducting a grid search to obtain optimized parameters for the CNN. Furthermore, a data augmentation technique based on different experimental scenarios is used to generate additional flame images and increase the size of the data set.
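The HDR step described above can be sketched as a weighted merge of a bracketed exposure sequence. The triangular mid-gray weighting and the [0, 1] pixel scale are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge photographs taken at different exposure times into a
    single high-dynamic-range radiance map.

    exposures : list of 2-D arrays with pixel values in [0, 1]
    times     : exposure time (s) of each frame

    Each frame is converted to radiance (value / time) and the frames
    are combined with a triangular weight that trusts mid-range pixels
    most, discounting under- and over-exposed ones.
    """
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(exposures[0], dtype=float)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peak weight at mid-gray
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)
```

A frame twice as long exposed thus contributes the same radiance estimate as its shorter counterpart, which is what makes the merged image usable across the large luminosity range between weak and normal flames.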

2018 ◽  
Vol 17 ◽  
pp. 153303381880278 ◽  
Author(s):  
Sarmad Shafique ◽  
Samabia Tehsin

Leukemia is a fatal disease of the white blood cells that affects the blood and bone marrow in the human body. We deployed a deep convolutional neural network for automated detection of acute lymphoblastic leukemia and classification of its subtypes into four classes, that is, L1, L2, L3, and Normal, which were mostly neglected in previous literature. In contrast to training from scratch, we deployed a pretrained AlexNet that was fine-tuned on our data set. The last layers of the pretrained network were replaced with new layers that classify the input images into the four classes. To reduce overfitting, data augmentation was used. We also compared data sets in different color models to check the performance on different color images. For acute lymphoblastic leukemia detection, we achieved a sensitivity of 100%, specificity of 98.11%, and accuracy of 99.50%; for acute lymphoblastic leukemia subtype classification, the sensitivity was 96.74%, specificity was 99.03%, and accuracy was 96.06%. Unlike standard methods, our proposed method achieved high accuracy without any need for microscopic image segmentation.
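The reported sensitivity, specificity, and accuracy follow their standard confusion-matrix definitions; a minimal helper (argument names are ours, not from the paper):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix
    counts: true/false positives (tp/fp) and true/false negatives
    (tn/fn)."""
    sensitivity = tp / (tp + fn)            # recall on the positive class
    specificity = tn / (tn + fp)            # recall on the negative class
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

A sensitivity of 100% with specificity below 100%, as reported for the detection task, corresponds to fn = 0 with a small number of false positives.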


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, which is motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency in the signal and produces encoded sequences. The sequences, once arranged into a 2D array, represent fingerprints of the signals. The benefit of this transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. On the first data set, which is more standardized than the other, our model outperforms previous works or at least matches them. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding step, and the labeling scheme. Conclusion: The evaluation results show that the accuracy exceeds 95% in some cases. We also analyze the effect of these parameters on the performance.
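The 1D-to-2D step can be illustrated with a plain reshape standing in for the paper's LSTM encoder, which would first map the raw samples to encoded sequences; everything here is an illustrative simplification:

```python
import numpy as np

def signal_to_image(signal, width):
    """Arrange a 1-D sensor signal into a 2-D array (a "fingerprint")
    so that image-classification models such as CNNs can be applied.
    A plain row-wise reshape stands in for the LSTM encoding; trailing
    samples that do not fill a full row are dropped.
    """
    n = (len(signal) // width) * width
    return np.asarray(signal[:n], dtype=float).reshape(-1, width)
```

The resulting array can be fed to any 2D convolutional classifier, which is the point of the transformation: temporal structure becomes spatial structure.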


Author(s):  
Jianping Ju ◽  
Hong Zheng ◽  
Xiaohang Xu ◽  
Zhongyuan Guo ◽  
Zhaohui Zheng ◽  
...  

Abstract Although convolutional neural networks have achieved success in the field of image classification, challenges remain in agricultural product quality sorting, such as machine-vision-based jujube defect detection. The performance of jujube defect detection mainly depends on the feature extraction and the classifier used. Due to the diversity of jujube materials and the variability of the testing environment, traditional manually extracted features often fail to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets based on a convolutional neural network and transfer learning is proposed to meet the practical demands of jujube defect detection. First, the original images collected from an actual jujube sorting production line were pre-processed, and the data were augmented to establish a data set of five categories of jujube defects. The original CNN model is then improved by embedding the SE module and by replacing the softmax loss function with the triplet loss and center loss functions. Finally, a model pre-trained on the ImageNet data set was trained on the jujube defect data set, so that the parameters of the pre-trained model could fit the parameter distribution of the jujube defect images, completing the transfer of the model and realizing the detection and classification of jujube defects. The classification results are analyzed through classification accuracy and confusion matrices against the comparison models and visualized with heatmaps. The experimental results show that the SE-ResNet50-CL model handles the fine-grained classification problem of jujube defect recognition well, reaching a test accuracy of 94.15%. The model has good stability and high recognition accuracy in complex environments.
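The embedded SE (squeeze-and-excitation) module can be sketched in numpy; passing the two fully connected weight matrices explicitly is our simplification of the usual learned-layer formulation:

```python
import numpy as np

def squeeze_excite(feature_maps, w1, w2):
    """Squeeze-and-Excitation recalibration of CNN feature maps.

    feature_maps : array of shape (C, H, W)
    w1 : (C // r, C) reduction weights, w2 : (C, C // r) expansion
         weights, with r the channel reduction ratio
    """
    z = feature_maps.mean(axis=(1, 2))       # squeeze: global average pool
    s = np.maximum(w1 @ z, 0.0)              # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # FC + sigmoid gate per channel
    return feature_maps * s[:, None, None]   # channel-wise rescaling
```

The gate lets the network emphasize channels that respond to defect-relevant features and suppress the rest, which is why SE blocks help on fine-grained tasks like this one.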


Plant Disease ◽  
2007 ◽  
Vol 91 (8) ◽  
pp. 1013-1020 ◽  
Author(s):  
David H. Gent ◽  
William W. Turechek ◽  
Walter F. Mahaffee

Sequential sampling models for estimation and classification of the incidence of powdery mildew (caused by Podosphaera macularis) on hop (Humulus lupulus) cones were developed using parameter estimates of the binary power law derived from the analysis of 221 transect data sets (model construction data set) collected from 41 hop yards sampled in Oregon and Washington from 2000 to 2005. Stop lines, models that determine when sufficient information has been collected to estimate mean disease incidence and stop sampling, for sequential estimation were validated by bootstrap simulation using a subset of 21 model construction data sets and by simulated sampling of an additional 13 model construction data sets. The achieved coefficient of variation (C) approached the prespecified C as the estimated disease incidence, p̂, increased, although achieving a C of 0.1 was not possible for data sets in which p̂ < 0.03 with the number of sampling units evaluated in this study. The 95% confidence interval of the median difference between p̂ of each yard (achieved by sequential sampling) and the true p of the original data set included 0 for all 21 data sets evaluated at C levels of 0.1 and 0.2. For sequential classification, the operating characteristic (OC) and average sample number (ASN) curves of the sequential sampling plans obtained by bootstrap analysis and simulated sampling were similar to the OC and ASN values determined by Monte Carlo simulation. Correct decisions on whether disease incidence was above or below prespecified thresholds (pt) were made for 84.6% or 100% of the data sets during simulated sampling when stop lines were determined assuming a binomial or beta-binomial distribution of disease incidence, respectively. However, the higher proportion of correct decisions obtained by assuming a beta-binomial distribution of disease incidence required, on average, sampling 3.9 more plants per sampling round to classify disease incidence compared with the binomial distribution. Use of these sequential sampling plans may aid growers in deciding the order in which to harvest hop yards to minimize the risk of a condition called “cone early maturity” caused by late-season infection of cones by P. macularis. Sequential sampling could also aid research efforts, such as efficacy trials, where many hop cones are assessed to determine disease incidence.
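Stop lines for sequential classification of binomial incidence are commonly built from Wald's sequential probability ratio test; a sketch of that standard construction follows (the study's plans also cover the beta-binomial case, and their exact parameterization may differ):

```python
import math

def sprt_stop_lines(n, p0, p1, alpha=0.05, beta=0.05):
    """Wald SPRT stop lines for classifying binomial disease incidence
    against a low hypothesis p0 and a high hypothesis p1 (p0 < p1).

    Returns (low, high): after n sampling units, sampling stops with a
    "below threshold" decision if the cumulative diseased count falls
    at or under `low`, and "above threshold" if it reaches `high`;
    otherwise sampling continues.
    """
    g = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
    h0 = math.log(beta / (1 - alpha)) / g     # lower intercept (negative)
    h1 = math.log((1 - beta) / alpha) / g     # upper intercept (positive)
    s = math.log((1 - p0) / (1 - p1)) / g     # common slope, p0 < s < p1
    return h0 + s * n, h1 + s * n
```

Both lines share the slope s, so the continue-sampling band has constant width; plans assuming extra-binomial (beta-binomial) variation widen this band, which is consistent with the roughly 3.9 additional plants per round reported above.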


Author(s):  
Pedro Tomás ◽  
IST TU Lisbon ◽  
Aleksandar Ilic ◽  
Leonel Sousa

When analyzing the neuronal code, neuroscientists usually perform extra-cellular recordings of neuronal responses (spikes). Since the microelectrodes used for these recordings are much larger than the cells, responses from multiple neurons are recorded by each microelectrode. The obtained responses must therefore be classified and evaluated in order to identify how many neurons were recorded and to assess which neuron generated each spike. A platform for the mass classification of neuronal responses is proposed in this chapter, employing data parallelism to speed up the classification of neuronal responses. The platform is built in a modular way, supporting multiple web interfaces, different back-end environments for parallel computing, and different algorithms for spike classification. Experimental results on the proposed platform show that even for an unbalanced data set of neuronal responses the execution time was reduced by about 45%. For balanced data sets, the platform may achieve a reduction in execution time equal to the inverse of the number of back-end computational elements.
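The data-parallel pattern can be illustrated by splitting the recorded waveforms across workers; nearest-template assignment is our stand-in here for the platform's actual spike classification algorithms:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def classify_spikes_parallel(spikes, templates, workers=4):
    """Data-parallel spike sorting sketch: the waveform array is split
    into chunks, and each worker assigns its chunk to the nearest
    template (one template per putative neuron).

    spikes    : array (n_spikes, n_features)
    templates : array (n_neurons, n_features)
    """
    def nearest(chunk):
        # squared distance of every spike in the chunk to every template
        d = ((chunk[:, None, :] - templates[None, :, :]) ** 2).sum(-1)
        return d.argmin(axis=1)

    chunks = np.array_split(spikes, workers)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(nearest, chunks))
    return np.concatenate(parts)
```

Because each chunk is independent, the ideal speedup is the inverse of the number of workers, matching the balanced-data-set behavior described above; unbalanced chunks leave some workers idle, which is why the observed reduction was smaller.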


2007 ◽  
Vol 7 (2) ◽  
pp. 471-483 ◽  
Author(s):  
P. Eriksson ◽  
M. Ekström ◽  
B. Rydberg ◽  
D. P. Murtagh

Abstract. More accurate global measurements of the amount of ice in thicker clouds are needed to validate atmospheric models, and sub-mm radiometry can be an important component in this respect. A cloud ice retrieval scheme for the first such instrument in space, Odin-SMR, is presented here. Several advantages of sub-mm observations are shown, such as a low influence of particle shape and orientation, and a high dynamic range of the retrievals. In the case of Odin-SMR, only cloud ice above ≈12.5 km can be measured. The present retrieval scheme gives a detection threshold of about 4 g/m² above 12.5 km and does not saturate even for the thickest observed clouds (>500 g/m²). The main retrieval uncertainties are the assumed particle size distribution and cloud inhomogeneity effects. The overall retrieval accuracy is estimated to be ~75%. The retrieval error is judged to have large random components and to be significantly lower than this value for averaged results, but high fixed errors cannot be excluded. However, a firm lower value can always be provided. Initial results are found to be consistent with similar Aura MLS retrievals, but show important differences from corresponding data from atmospheric models. This first retrieval algorithm is limited to the lowermost Odin-SMR tangent altitudes, and further development should improve the detection threshold and the vertical resolution. It should also be possible to decrease the retrieval uncertainty associated with cloud inhomogeneities by detailed analysis of other data sets.


Author(s):  
Mohamed Elhadi Rahmani ◽  
Abdelmalek Amine ◽  
Reda Mohamed Hamou

Many drugs in modern medicine originate from plants, and the first step in drug production is the recognition of the plants needed for this purpose. This article presents a bagging approach for medicinal plant recognition based on DNA sequences. The authors developed a system that recognizes the DNA sequences of 14 medicinal plants: first, they divided the 14-class data set into two-class sub-data sets; then, instead of using one algorithm to classify the 14-class data set directly, they applied the same algorithm to the sub-data sets. By doing so, they simplified the problem of classifying 14 plants into sub-problems of binary classification. To construct the subsets, the authors extracted all possible pairs of the 14 classes, giving each class more chances to be predicted correctly. This approach also allows studying the similarity between the DNA sequences of each plant and every other plant. In terms of results, the authors obtained very good accuracy, which nearly doubled (from 45% to almost 80%). Classification of a new sequence is completed by majority vote.
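The pairwise decomposition with majority voting can be sketched as follows; the per-pair classifiers are passed in as plain callables, which is an illustrative simplification of the article's trained models:

```python
from collections import Counter
from itertools import combinations

def make_pairs(classes):
    """All unordered class pairs: each class appears in len(classes) - 1
    binary sub-problems, giving it multiple chances to be predicted."""
    return list(combinations(classes, 2))

def pairwise_majority_vote(sample, pair_classifiers):
    """One-vs-one prediction: `pair_classifiers` maps each class pair
    (a, b) to a callable that returns a or b for a sample; the final
    label is the class winning the most pairwise duels."""
    votes = Counter(clf(sample) for clf in pair_classifiers.values())
    return votes.most_common(1)[0][0]
```

With 14 classes this yields 91 binary sub-problems, and every class participates in 13 of them.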


Author(s):  
Aydin Ayanzadeh ◽  
Sahand Vahidnia

In this paper, we leverage state-of-the-art models pre-trained on the ImageNet data set. We use the pre-trained models and their learned weights to extract features from the dog breed identification data set. Afterwards, we apply fine-tuning and data augmentation to increase the test accuracy in the classification of dog breeds. The performance of the proposed approach is compared across state-of-the-art ImageNet models, namely ResNet-50, DenseNet-121, DenseNet-169 and GoogleNet, for which we achieved 89.66%, 85.37%, 84.01% and 82.08% test accuracy, respectively, showing the superior performance of the proposed method over previous works on the Stanford dog breeds data set.
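The feature-extraction stage can be sketched as training only a softmax head on features from a frozen backbone; here the backbone outputs are assumed precomputed, and the tiny gradient-descent loop is our illustration, not the paper's training setup:

```python
import numpy as np

def train_linear_head(features, labels, n_classes, lr=0.1, epochs=500):
    """Transfer-learning sketch: `features` are assumed to come from a
    frozen pre-trained backbone (e.g. a ResNet-50 penultimate layer);
    only a softmax classification head is trained on them."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(features.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * features.T @ (p - onehot) / len(labels)  # softmax gradient
    return W

def predict_head(features, W):
    return (features @ W).argmax(axis=1)
```

Fine-tuning would additionally unfreeze some backbone layers and update them at a smaller learning rate; the head-only variant above is the cheapest point on that spectrum.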


2021 ◽  
Vol 5 (12) ◽  
pp. 283
Author(s):  
Braden Garretson ◽  
Dan Milisavljevic ◽  
Jack Reynolds ◽  
Kathryn E. Weil ◽  
Bhagya Subrayan ◽  
...  

Abstract Here we present a catalog of 12,993 photometrically classified supernova-like light curves from the Zwicky Transient Facility, along with candidate host galaxy associations. By training a random forest classifier on spectroscopically classified supernovae from the Bright Transient Survey, we achieve an accuracy of 80% across four supernova classes, resulting in a final data set of 8208 Type Ia, 2080 Type II, 1985 Type Ib/c, and 720 SLSN. Our work represents a pathfinder effort to supply massive data sets of supernova light curves with value-added information that can be used to enable population-scale modeling of explosion parameters and to investigate host galaxy environments.
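A random forest needs tabular inputs, so each light curve is first reduced to summary features; the simple features below are our stand-ins for the catalog's actual feature set:

```python
import numpy as np

def light_curve_features(time, mag):
    """Reduce a supernova light curve (time in days, brightness in
    magnitudes, where smaller magnitude = brighter) to summary features
    suitable for a tree-based classifier."""
    peak = int(mag.argmin())  # brightest epoch
    return {
        "peak_mag": float(mag[peak]),
        "rise_time": float(time[peak] - time[0]),
        "decline_rate": float((mag[-1] - mag[peak])
                              / max(time[-1] - time[peak], 1e-9)),
        "amplitude": float(mag.max() - mag.min()),
    }
```

Rise time and decline rate are classic discriminators (e.g. SLSNe evolve far more slowly than Type Ia), which is why features of this kind can separate the four classes without spectra.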


2021 ◽  
Vol 7 (2) ◽  
pp. 755-758
Author(s):  
Daniel Wulff ◽  
Mohamad Mehdi ◽  
Floris Ernst ◽  
Jannis Hagenah

Abstract Data augmentation is a common method for making deep learning feasible on limited data sets. However, classical image augmentation methods produce highly unrealistic images on ultrasound data. Another approach is to utilize learning-based augmentation methods, e.g. based on variational autoencoders or generative adversarial networks. However, a large amount of data is necessary to train these models, which is typically not available in scenarios where data augmentation is needed. One solution to this problem could be the transfer of augmentation models between different medical imaging data sets. In this work, we present a qualitative study of the cross-data-set generalization performance of different learning-based augmentation methods for ultrasound image data. We show that knowledge transfer is possible in ultrasound image augmentation and that the augmentation partially results in semantically meaningful transfers of structures, e.g. vessels, across domains.
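For contrast with the learning-based methods studied here, a classical augmentation of the kind the abstract calls unrealistic on ultrasound is trivial to write; the flip probability and noise scale are arbitrary illustrative choices:

```python
import numpy as np

def classical_augment(image, rng):
    """Classical image augmentation: random horizontal flip plus
    additive Gaussian noise, for images with values in [0, 1]. On
    ultrasound, such generic transforms ignore the physics of speckle
    and the probe geometry, which is why they look unrealistic."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                       # horizontal flip
    out = out + rng.normal(scale=0.01, size=out.shape)
    return np.clip(out, 0.0, 1.0)
```

Learning-based augmenters instead sample new images from a model of the data distribution, which is what makes their cross-data-set transfer, as studied above, non-trivial.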

