Fundus Image Classification Using Convolutional Neural Network

2021 ◽  
Author(s):  
U. Savitha ◽  
Kodali Lahari Chandana ◽  
A. Cathrin Sagayam ◽  
S. Bhuvaneswari

Classification of different eye diseases has clinical value in establishing the actual status of the eye, in assessing the outcome of medication, and in weighing other alternatives during the curative phase. Simplicity and clinical relevance are the most important requirements for any classification system. Existing systems use different machine learning techniques but detect only a single disease, whereas a deep learning system based on convolutional neural networks (CNNs) can learn hierarchical representations of images that distinguish diseased from normal eye patterns.
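As a concrete illustration, a minimal sketch of such a CNN classifier follows; the layer sizes, input resolution, and binary diseased/normal output are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a fundus image classifier (diseased vs. normal).
# Architecture and input size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fundus_cnn(input_shape=(224, 224, 3), num_classes=2):
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fundus_cnn()
model.summary()
```

The stacked convolution/pooling pairs are what give the hierarchical representation the abstract refers to: early layers respond to edges and textures, later layers to larger retinal structures.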

2020 ◽  
Author(s):  
R Akshay Dharmapuri

Integration and validation are the most vital steps before Intel releases products to customers. The validation team qualifies each release through multiple stages of validation across the hardware and software stack. Bugs are raised after test cases are executed on each platform, so similar bugs filed by different users accumulate, and many issues end up closed as duplicates. The main objective is to find these similar bugs for each newly filed bug so that debug effort can be reused. Similar bugs are found both by term-based search using Elasticsearch, a text search engine, and by neural-network-based search that takes context into account. With Elasticsearch, scoring algorithms based on driver versions and platform hierarchy are applied to rank the similar bugs. LSTM neural networks are also incorporated to predict duplicate bugs by considering the context of each sentence, thereby increasing accuracy.
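A minimal sketch of the term-based retrieval step might look as follows, assuming a running Elasticsearch instance and a hypothetical `bugs` index with `title`, `description`, and `driver_version` fields; the boost weights stand in for the paper's version- and platform-aware scoring, whose exact form is not given here.

```python
# Sketch of term-based duplicate-bug retrieval with Elasticsearch.
# Index name, field names, and boost weights are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def find_similar_bugs(title, description, driver_version, top_k=10):
    # Match on free text; boost hits sharing the same driver version,
    # mirroring the idea of version-aware ranking.
    query = {
        "bool": {
            "must": [
                {"multi_match": {"query": title + " " + description,
                                 "fields": ["title^2", "description"]}}
            ],
            "should": [
                {"term": {"driver_version": {"value": driver_version,
                                             "boost": 2.0}}}
            ],
        }
    }
    resp = es.search(index="bugs", query=query, size=top_k)
    return [(hit["_score"], hit["_source"]["title"])
            for hit in resp["hits"]["hits"]]
```

Candidates ranked this way could then be re-scored by the LSTM model, which compares the new report against each candidate with sentence context taken into account.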


2021 ◽  
Vol 10 (02) ◽  
pp. 07-11
Author(s):  
Kanakaveti Narasimha Dheeraj ◽  
Goutham. R. J ◽  
Arthi. L

Agriculture is said to be the backbone of the economy. Farmers toil hard over different kinds of crops to produce good, healthy food for the country. Several systems already exist, but they rely on older machine learning techniques based on recurrent neural networks (RNNs), which make the process slower and more time-consuming. Here we propose a new convolutional neural network (CNN) based system that is fast and returns accurate results within seconds. CNNs are power-efficient and better suited to real-time implementation, so this project uses CNN algorithms in place of the RNN algorithms of the existing system. The proposed system also takes more parameters into consideration for prediction, and additionally employs Random Forest Regression and Multiple Linear Regression.
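A minimal sketch of the regression side of such a system is shown below; the feature set and the synthetic data are assumptions for illustration, not the project's actual inputs.

```python
# Sketch of the two regressors named in the abstract: Random Forest
# Regression and Multiple Linear Regression on tabular crop features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Hypothetical features: rainfall, temperature, soil pH, fertilizer use.
X = rng.uniform(size=(500, 4))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("Random Forest", RandomForestRegressor(random_state=0)),
                    ("Multiple Linear Regression", LinearRegression())]:
    model.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```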


Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects. One of the popular techniques is machine learning. This unprecedented study applies deep learning, a branch of machine learning, to detect and evaluate the severity of combined rail defects. The combined defects in the study are settlement and dipped joint. Features used to detect and evaluate the severity of combined defects are axle box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. Deep learning techniques used in the study are the deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For simplified data, features are extracted from the raw data: the weight of rolling stock, the speed of rolling stock, and three peak and bottom accelerations from two wheels of rolling stock. In total, 14 features are used as simplified data for developing the DNN model. For raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing or data extraction. Hyperparameter tuning is performed by grid search to ensure that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches. The first approach uses one model to detect settlement and dipped joint together, and the second approach uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is good enough to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression concepts. Classification evaluates severity by categorizing defects into light, medium, and severe classes, and regression estimates the size of defects. From the study, the CNN model is suitable for evaluating dipped joint severity with an accuracy of 84% and a mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity with an accuracy of 99% and an MAE of 1.58 mm.
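A minimal sketch of grid-search tuning of a small feed-forward network on the 14 simplified features might look as follows; the grid values, network sizes, and dummy labels are assumptions, not the study's actual search space or data.

```python
# Sketch of grid-search hyperparameter tuning for a DNN-style model on
# 14 simplified features from 1650 simulations. Grid values are assumed.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1650, 14))      # 1650 simulations x 14 features
y = rng.integers(0, 2, size=1650)    # defect present / absent (dummy labels)

param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "alpha": [1e-4, 1e-3],
    "learning_rate_init": [1e-3, 1e-2],
}
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                      param_grid, cv=3, scoring="accuracy")
search.fit(X, y)
print("Best params:", search.best_params_)
```

Grid search simply trains one model per combination in the grid and keeps the best cross-validated score, which is why it is a common default for exhaustively tunable spaces like this one.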


2021 ◽  
Author(s):  
Rogini Runghen ◽  
Daniel B Stouffer ◽  
Giulio Valentino Dalla Riva

Collecting network interaction data is difficult. Non-exhaustive sampling and complex hidden processes often result in an incomplete data set. Thus, identifying potentially present but unobserved interactions is crucial both for understanding the structure of large-scale data and for predicting how previously unseen elements will interact. Recent studies in network analysis have shown that accounting for metadata (such as node attributes) can improve both our understanding of how nodes interact with one another and the accuracy of link prediction. However, the dimension of the object we need to learn to predict interactions in a network grows quickly with the number of nodes, which becomes computationally and conceptually challenging for large networks. Here, we present a new predictive procedure combining a graph embedding method with machine learning techniques to predict interactions on the basis of nodes' metadata. Graph embedding methods project the nodes of a network onto a low-dimensional latent feature space; the positions of the nodes in that space can then be used to predict interactions between them. Learning a mapping from the nodes' metadata to their positions in the latent feature space corresponds to a classic, low-dimensional machine learning problem. In our current study we used the Random Dot Product Graph (RDPG) model to estimate the embedding of an observed network, and we tested different neural network architectures to predict the positions of nodes in the latent feature space. Flexible machine learning techniques for mapping nodes onto their latent positions allow us to account for multivariate and possibly complex node metadata. To illustrate the utility of the proposed procedure, we apply it to a large dataset of tourist visits to destinations across New Zealand. We found that our procedure accurately predicts interactions for both existing nodes and nodes newly added to the network, while remaining computationally feasible even for very large networks. Overall, our study highlights that by exploiting the properties of a well-understood statistical model for complex networks and combining it with standard machine learning techniques, we can simplify the link prediction problem when incorporating multivariate node metadata. Our procedure can be immediately applied to different types of networks and to a wide variety of data from different systems. As such, from both a network science and a data science perspective, our work offers a flexible and generalisable procedure for link prediction.
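A minimal sketch of the two-step idea follows, on a synthetic graph and with an MLP standing in for the tested architectures: estimate latent positions by spectral embedding of the adjacency matrix (the standard estimator under the RDPG model), then learn a map from metadata to those positions so that new nodes can be placed, and scored, from metadata alone.

```python
# Sketch: (1) adjacency spectral embedding under the RDPG model,
# (2) a regressor mapping node metadata to latent positions.
# Graph, dimensions, and regressor choice are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, d = 200, 3
meta = rng.normal(size=(n, 5))                      # node metadata
A = (rng.uniform(size=(n, n)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetric adjacency

# (1) RDPG embedding: top-d scaled eigenvectors of A.
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:d]
X_hat = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))   # latent positions

# (2) Learn metadata -> latent position; dot products of predicted
# positions give link scores, even for previously unseen nodes.
reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(meta, X_hat)
new_pos = reg.predict(meta[:2])
print("predicted link score:", new_pos[0] @ new_pos[1])
```

The dimensionality reduction is what keeps the procedure tractable: the regression target has d columns regardless of how many nodes the network contains.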


2020 ◽  
Author(s):  
Georgios Kantidakis ◽  
Hein Putter ◽  
Carlo Lancia ◽  
Jacob de Boer ◽  
Andries E Braat ◽  
...  

Abstract
Background: Predicting the survival of recipients after liver transplantation is regarded as one of the most important challenges in contemporary medicine; hence, improving on current prediction models is of great interest. There is currently a strong discussion in the medical field about machine learning (ML) and whether it has greater potential than traditional regression models when dealing with complex data. Criticism of ML concerns unsuitable performance measures and a lack of interpretability, which is important for clinicians.
Methods: In this paper, ML techniques such as random forests and neural networks are applied to a large data set of 62,294 patients from the United States, with 97 predictors selected on clinical/statistical grounds from more than 600, to predict survival after transplantation. Of particular interest is also the identification of potential risk factors. A comparison is performed between 3 different Cox models (with all variables, backward selection, and LASSO) and 3 machine learning techniques: a random survival forest and 2 partial logistic artificial neural networks (PLANNs). For PLANNs, novel extensions to their original specification are tested. Emphasis is given to the advantages and pitfalls of each method and to the interpretability of the ML techniques.
Results: Well-established predictive measures from the survival field are employed (C-index, Brier score and Integrated Brier Score), and the strongest prognostic factors are identified for each model. The clinical endpoint is overall graft survival, defined as the time between transplantation and the date of graft failure or death. The random survival forest shows slightly better predictive performance than the Cox models based on the C-index. Neural networks show better performance than both the Cox models and the random survival forest based on the Integrated Brier Score at 10 years.
Conclusion: This work shows that machine learning techniques can be a useful tool for both prediction and interpretation in the survival context. Of the ML techniques examined here, PLANN with 1 hidden layer predicts survival probabilities the most accurately, being as well calibrated as the Cox model with all variables.
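For orientation, a minimal sketch of one baseline in such a comparison, a Cox proportional hazards model evaluated by C-index, is given below using the lifelines library; the toy data and column names are assumptions, not the registry data or the paper's 97 predictors.

```python
# Sketch of a Cox model baseline evaluated by C-index on synthetic
# graft-survival data. Predictors and data are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "meld_score": rng.normal(20, 5, n),   # hypothetical predictor
})
df["time"] = rng.exponential(365 * 5, n)  # synthetic survival times (days)
df["event"] = rng.integers(0, 2, n)       # graft failure/death vs. censored

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# Higher partial hazard means shorter survival, so negate for the C-index.
print("C-index:", concordance_index(df["time"],
                                    -cph.predict_partial_hazard(df),
                                    df["event"]))
```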


2018 ◽  
Vol 10 (1) ◽  
pp. 203 ◽  
Author(s):  
Xianming Dou ◽  
Yongguo Yang ◽  
Jinhui Luo

Approximating the complex nonlinear relationships that dominate the exchange of carbon dioxide fluxes between the biosphere and atmosphere is fundamentally important for addressing the issue of climate change. The progress of machine learning techniques has offered a number of useful tools for the scientific community aiming to gain new insights into the temporal and spatial variation of different carbon fluxes in terrestrial ecosystems. In this study, adaptive neuro-fuzzy inference system (ANFIS) and generalized regression neural network (GRNN) models were developed to predict the daily carbon fluxes in three boreal forest ecosystems based on eddy covariance (EC) measurements. Moreover, a comparison was made between the modeled values derived from these models and those of traditional artificial neural network (ANN) and support vector machine (SVM) models. These models were also compared with multiple linear regression (MLR). Several statistical indicators, including the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), bias error (Bias) and root mean square error (RMSE), were utilized to evaluate the performance of the applied models. The results showed that the developed machine learning models were able to account for most of the variance in the carbon fluxes at both daily and hourly time scales in the three stands, and they consistently and substantially outperformed the MLR model for both daily and hourly carbon flux estimates. The ANFIS and ANN models provided similar estimates in the testing period, with approximate values of R2 = 0.93, NSE = 0.91, Bias = 0.11 g C m−2 day−1 and RMSE = 1.04 g C m−2 day−1 for daily gross primary productivity; 0.94, 0.82, 0.24 g C m−2 day−1 and 0.72 g C m−2 day−1 for daily ecosystem respiration; and 0.79, 0.75, 0.14 g C m−2 day−1 and 0.89 g C m−2 day−1 for daily net ecosystem exchange, slightly outperforming the GRNN and SVM models. In practical terms, however, the newly developed models (ANFIS and GRNN) are more robust and flexible, and require fewer parameters to be selected and optimized than traditional ANN and SVM models. Consequently, they can be used as valuable tools to estimate forest carbon fluxes and to fill missing carbon flux data during long-term EC measurements.
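The four evaluation metrics are straightforward to compute; a minimal sketch follows, using common definitions (NSE as one minus the ratio of residual to total variance, R2 as squared correlation), which may differ in detail from the study's exact formulations.

```python
# Sketch of the evaluation metrics used in the study (R2, NSE, Bias, RMSE)
# computed on arbitrary observed/modeled flux arrays.
import numpy as np

def evaluate(obs, mod):
    obs, mod = np.asarray(obs), np.asarray(mod)
    r2 = np.corrcoef(obs, mod)[0, 1] ** 2                 # squared correlation
    nse = 1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)
    bias = np.mean(mod - obs)                             # mean model error
    rmse = np.sqrt(np.mean((mod - obs) ** 2))
    return {"R2": r2, "NSE": nse, "Bias": bias, "RMSE": rmse}

print(evaluate([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```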


Author(s):  
Mehmet Fatih Bayramoglu ◽  
Cagatay Basarir

Investing in developed markets offers investors the opportunity to diversify internationally by investing in foreign firms; in other words, it provides the possibility of reducing systematic risk. For this reason, investors are very interested in developed markets. However, developed markets are more efficient than emerging markets, so both risk and return can be low in these markets. Developed-market investors therefore often use machine learning techniques to increase their gains while reducing their risks. In this chapter, artificial neural networks (ANNs), one of the machine learning techniques, are tested as a means of improving internationally diversified portfolio performance, and the results of the ANNs are compared with the performances of traditional portfolios and a benchmark portfolio. The portfolios are derived by ANNs from the data of 16 foreign companies quoted on the NYSE, and they are held for 30 trading days. According to the results, the portfolio derived by the ANNs gained a 10.30% return, while the traditional portfolios gained a 5.98% return.
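A minimal sketch of the general approach, an MLP forecasting next-period returns from recent history for one stock, is given below on synthetic data; it is not the chapter's actual model, universe, or data.

```python
# Sketch of an ANN-assisted portfolio step: an MLP predicts the next
# return from the previous 5 days. Data are synthetic; the chapter
# uses 16 NYSE-listed firms over 30 trading days.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_days = 250
returns = rng.normal(0.0005, 0.01, size=(n_days, 16))  # 16 stocks

# Features: previous 5 daily returns of one stock; target: next return.
X = np.stack([returns[t-5:t, 0] for t in range(5, n_days)])
y = returns[5:, 0]

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:-30], y[:-30])                 # hold out the last 30 days
pred = mlp.predict(X[-30:])
print("mean predicted vs realised:", pred.mean(), y[-30:].mean())
```

In a full system, per-stock forecasts like these would drive the portfolio weights that are then compared against the traditional and benchmark portfolios.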


Author(s):  
Juan Gómez-Sanchis ◽  
Emilio Soria-Olivas ◽  
Marcelino Martinez-Sober ◽  
Jose Blasco ◽  
Juan Guerrero ◽  
...  

This work presents a new approach for one of the main problems in the analysis of atmospheric phenomena, the prediction of atmospheric concentrations of different elements. The proposed methodology is more efficient than other classical approaches and is used in this work to predict tropospheric ozone concentration. The relevance of this problem stems from the fact that excessive ozone concentrations may cause several problems related to public health. Previous research by the authors of this work has shown that the classical approach to this problem (linear models) does not achieve satisfactory results in tropospheric ozone concentration prediction. The authors’ approach is based on Machine Learning (ML) techniques, which include algorithms related to neural networks, fuzzy systems and advanced statistical techniques for data processing. In this work, the authors focus on one of the main ML techniques, namely, neural networks. These models demonstrate their suitability for this problem both in terms of prediction accuracy and information extraction.
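A minimal sketch of such a neural-network predictor on synthetic meteorological inputs follows; the feature choices and data are assumptions for illustration, not the authors' model.

```python
# Sketch of a neural-network ozone predictor: an MLP regressor on
# hypothetical inputs (temperature, solar radiation, NO2, wind speed).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 4))
o3 = 40 + 30 * X[:, 0] * X[:, 1] - 10 * X[:, 3] + rng.normal(0, 2, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, o3, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print("RMSE:", mean_squared_error(y_te, mlp.predict(X_te)) ** 0.5)
```

The nonlinear interaction term in the synthetic target is exactly the kind of structure that linear models miss and that motivates the ML approach described above.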


Author(s):  
Hesham M. Al-Ammal

Detection of anomalies in a given data set is a vital step in several applications in cybersecurity, including intrusion detection, fraud, and social network analysis. Many of these techniques detect anomalies by examining graph-based data. Analyzing graphs makes it possible to capture relationships and communities as well as anomalies, and the advantage of using graphs is that many real-life situations can be easily modeled by a graph that captures their structure and inter-dependencies. Although anomaly detection in graphs dates back to the 1990s, recent research has applied machine learning methods to anomaly detection over graphs. This chapter concentrates on static graphs (both labeled and unlabeled) and summarizes some of these recent studies in machine learning for anomaly detection in graphs, including methods such as support vector machines, neural networks, generative neural networks, and deep learning methods. The chapter reflects on the successes and challenges of using these methods in the context of graph-based anomaly detection.
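One common recipe underlying several of the surveyed methods is to extract structural features per node and apply an outlier detector; a minimal sketch follows, with the synthetic graph, the injected hub anomaly, and the choice of a One-Class SVM all being illustrative assumptions.

```python
# Sketch of static-graph anomaly detection: per-node structural features
# (degree, clustering coefficient) fed to a One-Class SVM.
import numpy as np
import networkx as nx
from sklearn.svm import OneClassSVM

G = nx.erdos_renyi_graph(200, 0.05, seed=0)
G.add_edges_from((0, v) for v in range(1, 60))   # inject a hub anomaly

clustering = nx.clustering(G)
features = np.array([[G.degree(v), clustering[v]] for v in G.nodes])

detector = OneClassSVM(nu=0.05).fit(features)
labels = detector.predict(features)              # -1 marks outliers
print("flagged nodes:", [v for v, l in zip(G.nodes, labels) if l == -1][:10])
```

Deep and generative variants replace the hand-picked features with learned node embeddings, but the detect-outliers-in-feature-space step is the same.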

