Economic Structure Analysis Based on Neural Network and Bionic Algorithm

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yanjun Dai ◽  
Lin Su

In this article, an in-depth study and analysis of economic structure are carried out using a neural network fused with a bionic algorithm. The method defines the weight space and structure space of neural networks from the perspective of optimization theory, proposes bionic optimization algorithms over both spaces, and establishes a neuroevolutionary framework with shallow and deep neural networks as the research objects. For shallow neuroevolution, an improved genetic algorithm (IGA) based on elitist operations and a migration strategy and an improved coyote optimization algorithm (ICOA) based on adaptive influence weights are proposed, and the IGA-based and ICOA-based shallow neuroevolutionary methods are applied to the weight space of backpropagation (BP) neural networks. For deep neuroevolution, the structure space of convolutional neural networks is introduced to define the search space for neural architecture search (NAS), and a GA-based deep neuroevolutionary method over this structure space is proposed to address the combinatorial explosion that arises when numerous hyperparameters and network structure parameters must be chosen in designing deep learning models. The neural network fused with a bionic algorithm has practical value for exploring the spatial structure and dynamics of the socioeconomic system, improving perception of the socioeconomic situation, and understanding the laws of social development, and the approach can be verified with present computer technology.
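
As a rough illustration of the weight-space evolution described above, the following Python sketch evolves the weights of a small one-hidden-layer (BP-style) network with a plain elitist genetic algorithm on a synthetic regression task. It is not the authors' IGA or ICOA; the data, population size, and mutation settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the economic indicators from the article are not available, so a
# synthetic regression task stands in purely to illustrate weight-space evolution.
X = rng.normal(size=(200, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

def unpack(flat, n_in=4, n_hid=8, n_out=1):
    """Reshape a flat chromosome into the weights of a 1-hidden-layer network."""
    i = 0
    W1 = flat[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = flat[i:i + n_hid]; i += n_hid
    W2 = flat[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = flat[i:i + n_out]
    return W1, b1, W2, b2

def fitness(flat):
    """Negative mean squared error of the decoded network on the toy data."""
    W1, b1, W2, b2 = unpack(flat)
    hidden = np.tanh(X @ W1 + b1)
    pred = hidden @ W2 + b2
    return -np.mean((pred - y) ** 2)

dim = 4 * 8 + 8 + 8 * 1 + 1                       # total number of weights and biases
pop = rng.normal(scale=0.5, size=(30, dim))

for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-10:]]          # elitist selection: keep the 10 best
    parents = elite[rng.integers(0, 10, size=(30, 2))]
    mask = rng.random((30, dim)) < 0.5             # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    children += rng.normal(scale=0.05, size=children.shape) * (rng.random(children.shape) < 0.1)
    pop = children
    pop[0] = elite[-1]                             # carry the best individual forward unchanged

print("best MSE:", -fitness(pop[0]))
```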

2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Keqin Chen ◽  
Amit Yadav ◽  
Asif Khan ◽  
Yixin Meng ◽  
Kun Zhu

Concrete cracks are serious and potentially dangerous defects. Present machine learning methods have three obvious limitations: a low recognition rate, low accuracy, and long processing time. Improved crack detection based on convolutional neural networks can automatically determine whether an image contains cracks and mark their locations, which greatly improves monitoring efficiency. Experimental results show that the Adam optimization algorithm and batch normalization (BN) make the model converge faster and achieve a maximum accuracy of 99.71%.
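
A minimal PyTorch sketch of the kind of model described above: a small CNN with batch normalization trained with the Adam optimizer on dummy crack/no-crack labels. The layer sizes, input resolution, and training loop are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrackNet(nn.Module):
    """Small CNN with batch normalization for crack / no-crack classification."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

    def forward(self, x):
        return self.classifier(self.features(x))

model = CrackNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # Adam, as in the abstract
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for crack-image patches and their labels.
images = torch.randn(8, 3, 120, 120)
labels = torch.randint(0, 2, (8,))

for step in range(5):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```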


Author(s):  
Qingyu Tian ◽  
Mao Ding ◽  
Hui Yang ◽  
Caibin Yue ◽  
Yue Zhong ◽  
...  

Background: Drug development requires a great deal of money and time, and the outcome is uncertain, so researchers urgently need approaches that can reduce costs. The identification of drug-target interactions (DTIs) has therefore become a critical step in the early stages of drug discovery. Computational methods aim to narrow the search space for novel DTIs and to elucidate the functional background of drugs. Most methods developed so far use binary classification to predict the presence or absence of an interaction between a drug and a target. However, it is more informative, but also more challenging, to predict the strength of the binding between a drug and its target; if the binding is not strong enough, such a DTI may not be useful. Hence, the development of methods to predict drug-target affinity (DTA) is of significant importance. Method: We improved the Graph DTA model from a dual-channel model to a triple-channel model. We interpreted the target/protein sequences as time series and extracted their features using an LSTM network. For the drug, we considered both the molecular structure and the local chemical context, retaining the four variant networks used in Graph DTA to extract the topological features of the drug and capturing the local chemical context of the atoms in the drug with a BiGRU. We thus obtained one latent feature of the target and two latent features of the drug. The concatenation of these three feature vectors is then input into a 2-layer fully connected (FC) network, which outputs the binding affinity value. Result: We used the Davis and Kiba datasets, with 80% of the data for training and 20% for validation. Our model shows better performance when compared with the experimental results of Graph DTA. Conclusion: In this paper, we modified the Graph DTA model to predict drug-target affinity. The drug is represented as a graph, and a graph convolutional neural network extracts its two-dimensional structural information. Simultaneously, the drug and the protein target are represented as word vectors, and sequence networks extract the sequential information of the drug and the target. We demonstrate that our improved method performs better than the original method, in particular in the evaluation on the benchmark databases.
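
The triple-channel fusion can be sketched in PyTorch as follows: an LSTM encodes the protein sequence, a BiGRU encodes the tokenized drug string, and a precomputed drug-graph embedding stands in for the graph branch (which in Graph DTA comes from graph-network variants over the molecular graph); the three latent vectors are concatenated and passed through a 2-layer FC head. All dimensions and vocabulary sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TripleChannelDTA(nn.Module):
    """Fuses protein-sequence, drug-sequence, and drug-graph features into an affinity score."""
    def __init__(self, prot_vocab=26, smiles_vocab=64, emb=64, hid=128, graph_dim=128):
        super().__init__()
        self.prot_emb = nn.Embedding(prot_vocab, emb)
        self.prot_lstm = nn.LSTM(emb, hid, batch_first=True)                    # protein channel
        self.smiles_emb = nn.Embedding(smiles_vocab, emb)
        self.smiles_bigru = nn.GRU(emb, hid, batch_first=True, bidirectional=True)  # local chemical context
        self.fc = nn.Sequential(                                                 # 2-layer FC regression head
            nn.Linear(hid + 2 * hid + graph_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, prot_tokens, smiles_tokens, graph_feat):
        _, (h_prot, _) = self.prot_lstm(self.prot_emb(prot_tokens))
        _, h_smiles = self.smiles_bigru(self.smiles_emb(smiles_tokens))
        h_smiles = torch.cat([h_smiles[0], h_smiles[1]], dim=-1)                 # both GRU directions
        fused = torch.cat([h_prot[-1], h_smiles, graph_feat], dim=-1)
        return self.fc(fused)                                                    # predicted binding affinity

model = TripleChannelDTA()
affinity = model(torch.randint(0, 26, (4, 1000)),   # tokenized protein sequences
                 torch.randint(0, 64, (4, 100)),    # tokenized drug strings
                 torch.randn(4, 128))               # placeholder drug-graph embeddings
```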


2020 ◽  
Vol 25 (4) ◽  
pp. 42-51
Author(s):  
Sineglazov V.M. ◽  
Chumachenko O.I.

The structural-parametric synthesis of deep learning neural networks, in particular the convolutional neural networks used in image processing, is considered. A classification of modern convolutional neural network architectures is given. It is shown that almost every convolutional neural network, depending on its topology, has unique blocks that determine its essential features (for example, the Squeeze-and-Excitation block, the Convolutional Block Attention Module (channel attention module, spatial attention module), the residual block, the Inception module, and the ResNeXt block). The problem of structural-parametric synthesis of convolutional neural networks is stated, and a genetic algorithm is proposed for its solution. The genetic algorithm is used to efficiently traverse a large search space: on the one hand, to generate possible topologies of the convolutional neural network, namely the choice of specific blocks and their locations in its structure, and on the other hand, to solve the structural-parametric synthesis problem for the selected topology. The most significant parameters of the convolutional neural network are determined. An encoding method is proposed that represents each network structure as a fixed-length binary string. Several standard genetic operations are then applied, i.e. selection, mutation, and crossover, which eliminate weak individuals of the previous generation and use them to generate competitive ones. An example of solving this problem is given, with a database of ultrasound results from patients with thyroid disease used as the training sample.
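
The fixed-length binary encoding and the standard genetic operations can be illustrated with the following Python sketch, in which each chromosome selects one block type per network position and the fitness function is a placeholder for training the decoded network; the block catalogue, chromosome length, and GA settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative block catalogue and encoding: each of 4 positions in the network
# is described by 2 bits selecting a block type, giving an 8-bit chromosome.
BLOCKS = ["residual", "inception", "squeeze_excite", "cbam"]
POSITIONS, BITS = 4, 2

def decode(chrom):
    """Turn a fixed-length binary string into a list of block names."""
    return [BLOCKS[int("".join(map(str, chrom[i * BITS:(i + 1) * BITS])), 2)]
            for i in range(POSITIONS)]

def fitness(chrom):
    # Placeholder for training the decoded CNN and returning validation accuracy.
    return -abs(int(chrom.sum()) - 4) + rng.normal(scale=0.1)

pop = rng.integers(0, 2, size=(20, POSITIONS * BITS))
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # selection: keep the 10 fittest
    pairs = parents[rng.integers(0, 10, size=(20, 2))]
    cuts = rng.integers(1, POSITIONS * BITS, size=20)       # one-point crossover
    children = np.array([np.concatenate([a[:c], b[c:]]) for (a, b), c in zip(pairs, cuts)])
    flip = rng.random(children.shape) < 0.05                # bit-flip mutation
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best topology:", decode(best))
```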


2020 ◽  
Vol 65 (6) ◽  
pp. 759-773
Author(s):  
Segu Praveena ◽  
Sohan Pal Singh

Early detection and diagnosis of leukaemia is a trending topic in medical applications for reducing the death toll of patients with acute lymphoblastic leukaemia (ALL). Detecting ALL requires analysing the white blood cells (WBCs), for which blood smear images are employed. This paper proposes a new technique for the segmentation and classification of acute lymphoblastic leukaemia. The proposed method of automatic leukaemia detection is based on a Deep Convolutional Neural Network (Deep CNN) trained with the Grey wolf-based Jaya Optimization Algorithm (GreyJOA), which is developed from the Grey Wolf Optimizer (GWO) and the Jaya Optimization Algorithm (JOA) to improve global convergence. Initially, the input image is pre-processed and segmented using the Sparse Fuzzy C-Means (Sparse FCM) clustering algorithm. Then, features such as Local Directional Patterns (LDP) and colour-histogram-based features are extracted from the segments of the pre-processed input image. Finally, the extracted features are passed to the Deep CNN for classification. Experimental evaluation of the method on images from the ALL IDB2 database reveals that the proposed method achieved a maximal accuracy, sensitivity, and specificity of 0.9350, 0.9528, and 0.9389, respectively.
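
As a rough illustration of the clustering-based segmentation step, the following Python sketch runs plain fuzzy c-means on synthetic pixel intensities; the sparsity regularization of Sparse FCM is omitted, and the data and parameters are illustrative assumptions.

```python
import numpy as np

# Plain fuzzy c-means on pixel intensities, standing in for the Sparse FCM
# segmentation step (the sparsity term is omitted). The "blood smear" here is
# a synthetic mix of background and cell-like intensities.
rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(0.2, 0.05, 2000),   # background pixels
                         rng.normal(0.7, 0.05, 500)])   # cell/blast pixels
x = pixels.reshape(-1, 1)

m, n_clusters = 2.0, 2                                   # fuzzifier and number of clusters
u = rng.random((x.shape[0], n_clusters))
u /= u.sum(axis=1, keepdims=True)                        # random initial memberships

for _ in range(50):
    um = u ** m
    centers = (um.T @ x) / um.sum(axis=0)[:, None]       # membership-weighted cluster centers
    dist = np.abs(x - centers.T) + 1e-9                  # distance of each pixel to each center
    inv = dist ** (-2.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)             # membership update

labels = u.argmax(axis=1)                                # hard segmentation from fuzzy memberships
print("cluster centers:", centers.ravel())
```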


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning features that help in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease using deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved to be efficient, with good accuracy, and can run on lightweight computational devices, while the proposed model is efficient at maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progression of the diseased region. The performance is compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), a Convolutional Neural Network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture extended with a few changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy, and it recognizes the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action: it helps the patient and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
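
A minimal PyTorch sketch of coupling a MobileNet V2 backbone with an LSTM head is given below. Since the abstract does not specify how the features are sequenced, this sketch treats each spatial position of the final feature map as one step of the sequence; the class count and hidden size are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class MobileNetLSTM(nn.Module):
    """MobileNet V2 feature extractor followed by an LSTM over spatial positions."""
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        self.backbone = models.mobilenet_v2().features   # (B, 1280, 7, 7) for 224x224 input
        self.lstm = nn.LSTM(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        f = self.backbone(x)                              # (B, 1280, H, W)
        seq = f.flatten(2).transpose(1, 2)                # (B, H*W, 1280): sequence of region descriptors
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])                           # logits per skin-disease class

model = MobileNetLSTM()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)                                       # torch.Size([2, 7])
```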


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Changming Wu ◽  
Heshan Yu ◽  
Seokhyeong Lee ◽  
Ruoming Peng ◽  
Ichiro Takeuchi ◽  
...  

Neuromorphic photonics has recently emerged as a promising hardware accelerator, with significant potential speed and energy advantages over digital electronics for machine learning algorithms, such as neural networks of various types. Integrated photonic networks are particularly powerful in performing analog computing of matrix-vector multiplication (MVM) as they afford unparalleled speed and bandwidth density for data transmission. Incorporating nonvolatile phase-change materials in integrated photonic devices enables indispensable programming and in-memory computing capabilities for on-chip optical computing. Here, we demonstrate a multimode photonic computing core consisting of an array of programmable mode converters based on on-waveguide metasurfaces made of phase-change materials. The programmable converters utilize the refractive index change of the phase-change material Ge2Sb2Te5 during phase transition to control the waveguide spatial modes with a very high precision of up to 64 levels in modal contrast. This contrast is used to represent the matrix elements, with 6-bit resolution and both positive and negative values, to perform MVM computation in neural network algorithms. We demonstrate a prototypical optical convolutional neural network that can perform image processing and recognition tasks with high accuracy. With a broad operation bandwidth and a compact device footprint, the demonstrated multimode photonic core is promising for large-scale photonic neural networks with ultrahigh computation throughputs.
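
The 6-bit, signed representation of matrix elements can be illustrated numerically with the following Python sketch, which quantizes a matrix to 64 levels and compares the resulting matrix-vector product with the exact one. It is a purely numerical illustration, not a model of the photonic device.

```python
import numpy as np

def quantize_6bit(w, levels=64):
    """Snap matrix elements in [-1, 1] onto 64 evenly spaced (6-bit, signed) levels."""
    w = np.clip(w, -1.0, 1.0)
    step = 2.0 / (levels - 1)          # 63 intervals across [-1, 1]
    return np.round(w / step) * step

rng = np.random.default_rng(3)
W = rng.uniform(-1, 1, size=(8, 8))    # target weight matrix
x = rng.uniform(-1, 1, size=8)         # input vector

exact = W @ x
approx = quantize_6bit(W) @ x          # MVM with 6-bit quantized weights
print("max absolute MVM error:", np.max(np.abs(exact - approx)))
```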


2021 ◽  
Vol 21 (01) ◽  
pp. 2150005
Author(s):  
ARUN T NAIR ◽  
K. MUTHUVEL

Analysis of retinal images is currently one of the more challenging areas of study. Numerous retinal diseases can be recognized by analysing the variations taking place in the retina; however, the main disadvantage of existing studies is their limited recognition accuracy. The proposed framework includes four phases, namely (i) blood vessel segmentation, (ii) feature extraction, (iii) optimal feature selection, and (iv) classification. Initially, the input fundus image is subjected to blood vessel segmentation, from which two binary thresholded images (one from a High Pass Filter (HPF) and the other from top-hat reconstruction) are acquired. These two images are compared: the areas common to both are taken as the major vessels, and the leftover regions are fused to form a vessel sub-image. These vessel sub-images are classified with a Gaussian Mixture Model (GMM) classifier, and the result is combined with the major vessels to form the segmented blood vessels. The segmented images are subjected to feature extraction, where features such as the proposed Local Binary Pattern (LBP), Gray-Level Co-Occurrence Matrix (GLCM), and Gray-Level Run Length Matrix (GLRM) are extracted. As the curse of dimensionality is the greatest issue, it is important to select the appropriate features from the extracted set for classification. In this paper, a new improved optimization algorithm, Moth Flame with New Distance Formulation (MF-NDF), is introduced for selecting the optimal features. Finally, the selected optimal features are passed to a Deep Convolutional Neural Network (DCNN) model for classification, and, to make the diagnosis precise, the weights of the DCNN are optimally tuned by the same optimization algorithm. The performance of the proposed algorithm is compared against conventional algorithms in terms of positive and negative measures.
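
As a rough illustration of the two-branch vessel segmentation step, the following Python sketch thresholds a high-pass-filtered image and a top-hat image and intersects them to keep the major vessels, with the remaining pixels kept as candidates for the GMM stage. The synthetic image, the white top-hat operator, and the parameters are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from skimage import filters, morphology

rng = np.random.default_rng(4)
fundus = rng.random((128, 128))                               # stand-in for a fundus image channel

highpass = fundus - filters.gaussian(fundus, sigma=3)         # HPF branch (original minus blur)
tophat = morphology.white_tophat(fundus, morphology.disk(5))  # top-hat branch

bin_hpf = highpass > filters.threshold_otsu(highpass)         # binary thresholded image 1
bin_top = tophat > filters.threshold_otsu(tophat)             # binary thresholded image 2

major_vessels = bin_hpf & bin_top                             # regions common to both branches
candidates = bin_hpf ^ bin_top                                # leftover regions -> vessel sub-images
print("major vessel pixels:", major_vessels.sum(), "candidate pixels:", candidates.sum())
```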


Author(s):  
E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and recognition of emotions in speech (RER) is the most demanded part of it. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer studies are carried out on the RAVDESS database, which contains emotional human speech. RAVDESS is a dataset containing 7356 files; the recordings contain the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions divided into male and female) for a total of 1440 speech-only samples. To train machine learning algorithms and deep neural networks to recognize emotions, the audio recordings must be pre-processed so as to extract the main characteristic features of particular emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the audio recordings. Computer studies of various neural network models for emotion recognition are carried out on the data described above, and machine learning algorithms are used for comparative analysis. Thus, the following models were trained during the experiments: logistic regression (LR), a support vector machine classifier (SVM), a decision tree (DT), a random forest (RF), gradient boosting over trees (XGBoost), the convolutional neural network CNN, the recurrent neural network RNN (ResNet18), and an ensemble of convolutional and recurrent networks, Stacked CNN-RNN. The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
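
A minimal Python sketch of the feature-extraction step, assuming the librosa library and a synthetic tone in place of an actual RAVDESS recording; pooling the features into one fixed-length vector is an illustrative choice.

```python
import numpy as np
import librosa

sr = 22050
# Synthetic tone standing in for a RAVDESS speech recording.
y = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr).astype(np.float32)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)       # Mel-frequency cepstral coefficients
chroma = librosa.feature.chroma_stft(y=y, sr=sr)         # chroma (pitch-class) coefficients
mel = librosa.feature.melspectrogram(y=y, sr=sr)         # frequency-spectrum characteristics

# Average each feature over time and concatenate into one fixed-length vector
# suitable for the classical classifiers (LR, SVM, DT, RF, XGBoost) or, after
# reshaping, for the CNN/RNN models.
features = np.concatenate([mfcc.mean(axis=1), chroma.mean(axis=1), mel.mean(axis=1)])
print(features.shape)   # (40 + 12 + 128,) = (180,)
```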


Author(s):  
Ramesh Adhikari ◽  
Suresh Pokharel

Data augmentation is widely used in image processing and pattern recognition problems to increase the diversity of the available data, and it is commonly used to improve classification accuracy when the available datasets are limited. Deep learning approaches have achieved an immense breakthrough in medical diagnostics over the last decade, but a significant amount of data is needed for the effective training of deep neural networks. Appropriate use of data augmentation techniques prevents the model from over-fitting and thus increases its generalization capability when tested later on unseen data. However, obtaining such large datasets for rare diseases remains a huge challenge in the medical field. This study presents a synthetic data augmentation technique using Generative Adversarial Networks to make more effective use of the existing data and evaluate the generalization capability of neural networks. In this research, a convolutional neural network (CNN) model is used to classify X-ray images of the human chest into normal and pneumonia classes; synthetic X-ray images are then generated from the available dataset using a deep convolutional generative adversarial network (DCGAN) model. Finally, the CNN model is trained again with the original dataset together with the augmented data generated by the DCGAN model. The classification performance of the CNN model improved by 3.2% when the augmented data were used along with the originally available dataset.
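
A minimal PyTorch sketch of a DCGAN-style generator that maps noise vectors to synthetic single-channel images is shown below; the 64×64 resolution and filter counts are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: upsamples a 100-dim noise vector to a 64x64 grayscale image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),      # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),       # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                            # 64x64, values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

gen = Generator()
fake_xrays = gen(torch.randn(16, 100))   # 16 synthetic images standing in for chest X-rays
print(fake_xrays.shape)                  # torch.Size([16, 1, 64, 64])
```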


2020 ◽  
Vol 10 (6) ◽  
pp. 2104
Author(s):  
Michał Tomaszewski ◽  
Paweł Michalski ◽  
Jakub Osuchowski

This article presents an analysis of the effectiveness of object detection in digital images when only a limited quantity of input data is available. The possibility of using a limited set of training data was achieved by developing a detailed scenario of the task, which strictly defined the conditions of detector operation for the convolutional neural network under consideration. The described solution utilizes known deep neural network architectures for learning and object detection. The article compares the detection results of the most popular deep neural networks while maintaining a limited training set composed of a specific number of images selected from diagnostic video. The analysed input material was recorded during an inspection flight conducted along high-voltage lines, and the object detector was built for a power insulator. The main contribution of the presented paper is the evidence that a limited training set (in our case, just 60 training frames) can be used for object detection, assuming an outdoor scenario with low variability of environmental conditions. Deciding which network will generate the best result for such a limited training set is not a trivial task, and the conducted research suggests that deep neural networks achieve different levels of effectiveness depending on the amount of training data. The most beneficial results were obtained for two convolutional neural networks: the faster region-based convolutional neural network (Faster R-CNN) and the region-based fully convolutional network (R-FCN). Faster R-CNN reached the highest AP (average precision), at a level of 0.8 for 60 frames. The R-FCN model obtained a worse AP result; however, the number of input samples has a significantly lower influence on its results than is the case for the other CNN models, which, in the authors' assessment, is a desired feature for a limited training set.
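
A minimal sketch of fine-tuning torchvision's Faster R-CNN implementation on a very small annotated set (one insulator class plus background) is given below; the dummy image and bounding box stand in for annotated inspection frames, and the optimizer settings are illustrative assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Replace the default box predictor with one for 2 classes: background + insulator.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Dummy annotated frame standing in for one of the ~60 training frames.
images = [torch.rand(3, 480, 640)]
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 220.0, 300.0]]),   # one insulator box (x1, y1, x2, y2)
    "labels": torch.tensor([1]),
}]

model.train()
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.005, momentum=0.9)
loss_dict = model(images, targets)    # in train mode, returns classification/regression losses
loss = sum(loss_dict.values())
loss.backward()
optimizer.step()
```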

