neural network models
Recently Published Documents

2022 ◽  
Vol 40 (3) ◽  
pp. 1-30
Zhiwen Xie ◽  
Runjie Zhu ◽  
Kunsong Zhao ◽  
Jin Liu ◽  
Guangyou Zhou ◽  

Cross-lingual entity alignment has attracted considerable attention in recent years. Past studies using conventional approaches to match entities share a common problem: they miss important structural information beyond the entities themselves in the modeling process. This is where graph neural network models step in. Most existing graph neural network approaches model individual knowledge graphs (KGs) separately, with a small number of pre-aligned entities serving as anchors to connect different KG embedding spaces. However, this design causes several major problems, including constrained performance due to the scarcity of available seed alignments, and neglect of pre-aligned links that carry useful contextual information between nodes. In this article, we propose DuGa-DIT, a dual gated graph attention network with dynamic iterative training, to address these problems in a unified model. The DuGa-DIT model captures neighborhood and cross-KG alignment features using intra-KG attention and cross-KG attention layers. With the dynamic iterative process, we can dynamically update the cross-KG attention score matrices, which enables our model to capture more cross-KG information. We conduct extensive experiments on two benchmark datasets and a case study in cross-lingual personalized search. Our experimental results demonstrate that DuGa-DIT outperforms state-of-the-art methods.
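As an illustration of the kind of building block such a model relies on, the following is a minimal numpy sketch of one gated graph-attention layer. The layer shape, the sigmoid gate, and all parameter names are our own simplification for exposition, not the DuGa-DIT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_graph_attention(h, adj, W, a, Wg):
    """One gated attention layer over a graph (illustrative sketch only)."""
    n, _ = h.shape
    z = h @ W                                      # projected node features
    # pairwise scores e_ij = leaky_relu(a . [z_i || z_j])
    e = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = np.concatenate([z[i], z[j]]) @ a
            e[i, j] = s if s > 0 else 0.2 * s      # leaky ReLU
    e = np.where(adj > 0, e, -1e9)                 # mask non-neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    agg = alpha @ z                                # attention-weighted neighbors
    gate = 1.0 / (1.0 + np.exp(-(h @ Wg)))         # sigmoid gate
    return gate * agg + (1.0 - gate) * (h @ W)     # gated update

d = 4
h = rng.normal(size=(3, d))                        # three toy entities
adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]]) # toy KG adjacency
W = rng.normal(size=(d, d))
a = rng.normal(size=(2 * d,))
Wg = rng.normal(size=(d, d))
out = gated_graph_attention(h, adj, W, a, Wg)
```

In DuGa-DIT, one such attention operates within a KG and another across the two KGs, with the cross-KG scores refreshed during iterative training.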

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 637
Maciej Zaborowicz ◽  
Katarzyna Zaborowicz ◽  
Barbara Biedziak ◽  
Tomasz Garbowski

Dental age is one of the most reliable methods for determining a patient's age. The timing of teething, the period of tooth replacement, and the degree of tooth attrition are important diagnostic factors in the assessment of an individual's developmental age. Dental age is used in orthodontics, pediatric dentistry, endocrinology, forensic medicine, and pathomorphology, but also in scenarios regarding international adoptions and illegal immigrants. The methods used to date are time-consuming and not very precise. For this reason, artificial intelligence methods are increasingly used to estimate the age of a patient. The present work is a continuation of the work of Zaborowicz et al. In the presented research, a set of 21 original indicators was used to create deep neural network models. The aim of this study was to verify whether a more accurate deep neural network model could be generated than those produced previously. The quality parameters of the produced models were as follows: the MAE error, depending on the learning set used, was between 2.34 and 4.61 months, while the RMSE error was between 5.58 and 7.49 months; the coefficient of determination R2 ranged from 0.92 to 0.96.
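The three reported quality parameters are standard regression metrics. A short numpy sketch of how they are computed, on hypothetical ages in months (not the study's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE, and R2 as reported for the dental-age models."""
    err = y_pred - y_true
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    ss_res = (err ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

# toy dental ages in months (hypothetical values)
y_true = np.array([100.0, 120.0, 140.0, 160.0])
y_pred = np.array([102.0, 118.0, 143.0, 158.0])
mae, rmse, r2 = regression_metrics(y_true, y_pred)
```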

2022 ◽  
Vol 12 (2) ◽  
pp. 850
Sungchul Lee ◽  
Eunmin Hwang ◽  
Yanghee Kim ◽  
Fatih Demir ◽  
Hyunhwa Lee ◽  

With the prevalence of obesity in adolescents, and its long-term influence on their overall health, a large body of research explores better ways to reduce the rate of obesity. The traditional approach of monitoring body mass index (BMI), calculated from an individual's weight and height, is no longer enough, and a better health care tool is needed. The current research therefore proposes an easier method that offers instant, real-time feedback to users based on data collected from the motion sensors of a smartphone. The study used an mHealth application to identify participants whose walking movements were characteristic of the high-BMI group. Using feedforward deep learning models and convolutional neural network models, the study distinguished the walking movements of nonobese and obese groups at a rate of 90.5%. The research highlights the potential of smartphones and suggests the mHealth application as a way to monitor individual health.
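A typical preprocessing step for such sensor-based models is slicing the continuous accelerometer stream into fixed-length windows that the network classifies. A numpy sketch of that step; the sampling rate, window length, and overlap are our assumptions, not values from the study:

```python
import numpy as np

def window_signal(signal, win, step):
    """Slice a tri-axial accelerometer stream into fixed-length,
    overlapping windows suitable as network input (illustrative sketch)."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

# 10 s of fake 50 Hz tri-axial data (x, y, z columns)
stream = np.random.default_rng(1).normal(size=(500, 3))
windows = window_signal(stream, win=128, step=64)  # 50% overlap
```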

Computers ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 13
Imran Zualkernan ◽  
Salam Dhou ◽  
Jacky Judas ◽  
Ali Reza Sajun ◽  
Brylle Ryan Gomez ◽  

Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, due to manual coding, the results are often stale by the time they reach the ecologists. Combining the Internet of Things (IoT) with deep learning addresses both problems: the images can be classified automatically, and the results made immediately available to ecologists. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app over the LoRaWAN low-power, wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists and to provide real-time animal sighting data from the camera traps in the field. A camera trap dataset of 66,400 images was used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While the performance of the trained models was statistically different (Kruskal–Wallis: Accuracy H(5) = 22.34, p < 0.05; F1-score H(5) = 13.82, p = 0.0168), there was only a 3% difference in F1-score between the worst (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (Accuracy = 96.1%; F1-score = 0.87; F1-score = 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and Nvidia Jetson edge devices using both TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7 and adversely affected the minority classes, reducing their F1-score to as low as 0.18.
Upon stress testing by processing 1000 images consecutively, the Jetson Nano running a TensorRT model outperformed the others with a latency of 0.276 s/image (s.d. = 0.002) while consuming an average current of 1665.21 mA. The Raspberry Pi consumed the least average current (838.99 mA) but with roughly ten times the latency, at 2.83 s/image (s.d. = 0.036). The Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds are below 80 km/h, including goats, lions, and ostriches. While the proposed architecture is viable, unbalanced data remain a challenge, and the results can potentially be improved by using object detection to reduce imbalances and by exploring semi-supervised learning.
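The macro F1 degradation on minority classes reported above follows directly from how the metric averages per-class scores. A numpy sketch on toy labels (not the study's data) showing how one minority class with no correct predictions drags the macro average down:

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Per-class F1 and its unweighted (macro) average."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return np.array(f1s), float(np.mean(f1s))

y_true = np.array([0, 0, 0, 1, 1, 2])   # class 2 is the minority
y_pred = np.array([0, 0, 1, 1, 1, 0])   # class 2 never predicted correctly
per_class, macro = macro_f1(y_true, y_pred, n_classes=3)
```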

eLife ◽  
2022 ◽  
Vol 11 ◽  
Baohua Zhou ◽  
Zifan Li ◽  
Sunnie Kim ◽  
John Lafferty ◽  
Damon A Clark

Animals have evolved sophisticated visual circuits to solve a vital inference problem: detecting whether or not a visual signal corresponds to an object on a collision course. Such events are detected by specific circuits sensitive to visual looming, or objects increasing in size. Various computational models have been developed for these circuits, but how the collision-detection inference problem itself shapes the computational structures of these circuits remains unknown. Here, inspired by the distinctive structures of LPLC2 neurons in the visual system of Drosophila, we build anatomically-constrained shallow neural network models and train them to identify visual signals that correspond to impending collisions. Surprisingly, the optimization arrives at two distinct, opposing solutions, only one of which matches the actual dendritic weighting of LPLC2 neurons. Both solutions can solve the inference problem with high accuracy when the population size is large enough. The LPLC2-like solution reproduces experimentally observed LPLC2 neuron responses for many stimuli and reproduces the canonical tuning of loom-sensitive neurons, even though the models are never trained on neural data. Thus, LPLC2 neuron properties and tuning are predicted by optimizing an anatomically-constrained neural network to detect impending collisions. More generally, these results illustrate how optimizing inference tasks that are important for an animal's perceptual goals can reveal and explain computational properties of specific sensory neurons.
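To make the idea of an anatomically-constrained loom detector concrete, here is a deliberately toy sketch of a single unit that pools rectified outward-motion signals from four quadrants, as a caricature of the LPLC2-like wiring. The quadrant encoding, weights, and threshold are entirely our invention, not the paper's model:

```python
import numpy as np

def loom_unit(flow, w):
    """Toy anatomically-constrained looming unit: rectified outward-motion
    inputs from four quadrants, weighted, summed, and thresholded."""
    # flow: (4,) signed radial motion in four visual quadrants
    # (positive = outward). Rectification keeps only outward motion.
    drive = np.maximum(flow, 0.0) @ w
    return max(drive - 1.0, 0.0)    # response above a fixed threshold

w = np.array([0.5, 0.5, 0.5, 0.5])        # non-negative "dendritic" weights
loom = np.array([1.0, 1.0, 1.0, 1.0])     # symmetric outward expansion
drift = np.array([1.0, -1.0, 0.0, 0.0])   # translation, not a loom
```

The unit responds to the symmetric expansion but not to translation, which is the qualitative selectivity that looming circuits must achieve.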

2022 ◽  
Vol 15 ◽  
Min-seok Kim ◽  
Joon Hyuk Cha ◽  
Seonhwa Lee ◽  
Lihong Han ◽  
Wonhyoung Park ◽  

Few studies have applied deep learning to anatomical structure segmentation; those that did used small numbers of training and ground-truth images, and their accuracies were low or inconsistent. Surgical video anatomy analysis faces various obstacles, including a rapidly changing view, large deformations, occlusions, low illumination, and inadequate focus. In addition, it is difficult and costly to obtain a large and accurate dataset of anatomical structures in operative video, including arteries. In this study, we investigated cerebral artery segmentation using an automatic ground-truth generation method. Indocyanine green (ICG) fluorescence intraoperative cerebral videoangiography was used to create a ground-truth dataset mainly for cerebral arteries and partly for cerebral blood vessels, including veins. Four different neural network models were trained on the dataset and compared. Before augmentation, 35,975 training images and 11,266 validation images were used; after augmentation, 260,499 training and 90,129 validation images were used. A Dice score of 79% for cerebral artery segmentation was achieved using the DeepLabv3+ model trained on the automatically generated dataset. Strict validation in different patient groups was conducted. Arteries were also discerned from veins using the ICG videoangiography phase. We achieved fair accuracy, which demonstrated the appropriateness of the methodology. This study demonstrated the feasibility of cerebral artery segmentation in the operating-field view using deep learning, and the effectiveness of the automatic blood vessel ground-truth generation method using ICG fluorescence videoangiography. Using this method, computer vision can discern blood vessels, and distinguish arteries from veins, in a neurosurgical microscope field of view. This technique is thus essential for neurosurgical vessel-anatomy-based navigation. In addition, neurorobotics for surgical assistance, safety, and autonomous surgery that detects or manipulates cerebral vessels would require computer vision to identify blood vessels and arteries.
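The Dice score used to evaluate the segmentation is the standard overlap metric between predicted and ground-truth masks. A numpy sketch on a toy 4×4 mask (not the study's data):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks:
    2|P ∩ T| / (|P| + |T|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1                 # 4-pixel toy "vessel"
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1                  # slightly over-segmented prediction
d = dice_score(pred, truth)
```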

2022 ◽  
Leon Faure ◽  
Bastien Mollet ◽  
Wolfram Liebermeister ◽  
Jean-Loup Faulon

Metabolic networks have largely been exploited as mechanistic tools to predict the behavior of microorganisms with a defined genotype in different environments. However, flux predictions by constraint-based modeling approaches are limited in quality unless labor-intensive experiments, including the measurement of media intake fluxes, are performed. Using machine learning instead of an optimization of biomass flux, on which most existing constraint-based methods are based, provides ways to improve flux and growth rate predictions. In this paper, we show how recurrent neural networks can act as a surrogate for constraint-based modeling, making metabolic networks suitable for backpropagation and consequently usable as a machine learning architecture. We refer to our hybrid mechanistic and neural network models as Artificial Metabolic Networks (AMN). We showcase AMN and illustrate its performance on an experimental dataset of Escherichia coli growth rates in 73 different media compositions, reaching a regression coefficient of R2 = 0.78 on cross-validation sets. We expect AMNs to provide easier discovery of metabolic insights and prompt new biotechnological applications.
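The core mechanistic constraint such a recurrent layer can encode is steady state, S v = 0, where S is the stoichiometric matrix and v the flux vector. A numpy sketch of a recurrent projection of fluxes toward the steady-state set; the toy network and step size are our own illustration, not the AMN architecture:

```python
import numpy as np

# Toy stoichiometric matrix: uptake -> A -> B -> export.
# Rows = internal metabolites (A, B); columns = reactions
# (uptake, A->B, B->C, export).
S = np.array([[1.0, -1.0,  0.0, 0.0],
              [0.0,  1.0, -1.0, 0.0]])

def project_steady_state(v, S, iters=50, lr=0.5):
    """Recurrent projection of a flux vector toward S v = 0 by
    gradient descent on ||S v||^2 (illustrative sketch)."""
    for _ in range(iters):
        v = v - lr * S.T @ (S @ v)
    return v

v0 = np.array([1.0, 0.2, 0.9, 0.3])   # inconsistent initial fluxes
v = project_steady_state(v0, S)        # mass-balanced fluxes
```

Because each step is differentiable, gradients can flow through the whole iteration, which is what makes a mechanistic layer like this trainable by backpropagation.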

2022 ◽  
Sertaç Yaman ◽  
Barış Karakaya ◽  
Yavuz Erol

COVID-19 is still a fatal disease that has threatened all people by affecting the human lungs. Chest X-ray or computed tomography (CT) imaging is commonly used for fast and reliable medical investigation, but detecting the COVID-19 virus in these medical images is remarkably challenging because it is a full-time job and prone to human error. In this paper, a new normalization algorithm consisting of Mean-Variance-Softmax-Rescale (MVSR) processes, applied in that order, is proposed to facilitate pre-assessment and diagnosis of COVID-19. To show the effect of the MVSR normalization technique on image processing, the algorithm is applied to chest X-ray images. The MVSR-normalized X-ray images are then classified with a convolutional neural network (CNN). At the implementation stage, the MVSR algorithm is executed in the MATLAB environment and then implemented on an FPGA platform. All arithmetic operations of the MVSR normalization are coded in VHDL using a fixed-point fractional number representation. The experimental platform consists of a Zynq-7000 development board and a VGA monitor to display both the original X-ray and the MVSR-normalized image. The CNN model is constructed and executed using the Anaconda Navigator interface with Python. Based on the results of this study, COVID-19 infections can be more easily diagnosed from MVSR-normalized images: the proposed MVSR normalization increases the accuracy of the CNN model from 83.01% to 96.16% for binary classification of chest X-ray images.

2022 ◽  
Vol 12 ◽  
Ryo Fujiwara ◽  
Hiroyuki Nashida ◽  
Midori Fukushima ◽  
Naoya Suzuki ◽  
Hiroko Sato ◽  

Evaluation of the legume proportion in grass-legume mixed swards is necessary for breeding and for forage cultivation research. For objective and time-efficient estimation of the legume proportion, convolutional neural network (CNN) models were trained by fine-tuning GoogLeNet to estimate the coverage of timothy (TY), white clover (WC), and background (Bg) in unmanned aerial vehicle (UAV) images. The accuracies of CNN models trained on different datasets were compared using the mean bias error and the mean absolute error. The models predicted coverage with small errors when the plots in the training datasets were similar to the target plots in terms of coverage rate, and models trained on datasets of multiple plots had smaller errors than those trained on datasets of a single plot. The CNN models estimated WC coverage more precisely than TY or Bg coverage. The correlation coefficients (r) of measured coverage from aerial images vs. estimated coverage were 0.92–0.96, whereas those of coverage scored by a breeder vs. estimated coverage were 0.76–0.93. These results indicate that CNN models are helpful for effectively estimating legume coverage.
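The two comparison metrics differ in that bias errors of opposite sign cancel in the MBE but not in the MAE. A numpy sketch on toy coverage fractions (not the study's data):

```python
import numpy as np

def coverage_errors(measured, estimated):
    """Mean bias error (MBE, signed) and mean absolute error (MAE)
    for coverage estimates."""
    diff = estimated - measured
    return float(diff.mean()), float(np.abs(diff).mean())

measured = np.array([0.40, 0.55, 0.30])    # toy WC coverage fractions
estimated = np.array([0.42, 0.50, 0.31])
mbe, mae = coverage_errors(measured, estimated)
```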

2022 ◽  
Selcuk Cankurt ◽  
Abdulhamit Subasi

Over the last decades, several soft computing techniques have been applied to tourism demand forecasting. Among these techniques, the neuro-fuzzy ANFIS model (adaptive neuro-fuzzy inference system) has started to emerge. A conventional ANFIS model cannot deal with datasets of high dimensionality, and thus cannot handle our dataset, which is composed of 62 time series. This study develops an ensemble model that incorporates neural networks with ANFIS to deal with a large number of input variables in multivariate forecasting. Our proposed approach combines two base learners, which are neural network models, with an ANFIS meta-learner in a stacking ensemble framework. The results show that the stacking ensemble of the ANFIS meta-learner and the ANN base learners outperforms its stand-alone counterparts: the proposed ensemble model achieved a MAPE of 7.26%, compared with MAPEs of 8.50% and 9.18% for the two single-instance ANN models. Finally, this study, a novel application of ensemble systems in the context of tourism demand forecasting, has shown better results than single expert systems based on artificial neural networks.
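The stacking scheme itself is simple to sketch: two base learners each produce predictions, and a meta-learner is fit on those predictions. In this numpy sketch the learners are least-squares regressions standing in for the ANN base models and the ANFIS meta-learner; the data and feature split are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    """Least-squares learner (stand-in for an ANN or ANFIS model)."""
    Xb = np.c_[X, np.ones(len(X))]          # add a bias column
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def predict_linear(w, X):
    return np.c_[X, np.ones(len(X))] @ w

# Each base learner sees a different feature subset; the meta-learner
# is fit on the base learners' predictions (the stacking step).
X = rng.normal(size=(60, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=60)
w1 = fit_linear(X[:, :2], y)                # base learner 1
w2 = fit_linear(X[:, 2:], y)                # base learner 2
meta_X = np.c_[predict_linear(w1, X[:, :2]),
               predict_linear(w2, X[:, 2:])]
w_meta = fit_linear(meta_X, y)              # meta-learner stacks them
y_hat = predict_linear(w_meta, meta_X)
```

In practice the meta-learner should be fit on out-of-fold base predictions to avoid leakage; this sketch omits that for brevity.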
