Neural Network Approaches to Reconstruct Phytoplankton Time-Series in the Global Ocean

2020 ◽  
Vol 12 (24) ◽  
pp. 4156
Author(s):  
Elodie Martinez ◽  
Anouar Brini ◽  
Thomas Gorgues ◽  
Lucas Drumetz ◽  
Joana Roussillon ◽  
...  

Phytoplankton plays a key role in the carbon cycle and supports the oceanic food web. While its seasonal and interannual cycles are rather well characterized owing to the modern satellite ocean color era, its longer-term variability remains largely unknown because global-scale observations cover only a short period. With the aim of reconstructing this longer-term phytoplankton variability, a support vector regression (SVR) approach was recently used to derive surface chlorophyll-a concentration (Chl, a proxy of phytoplankton biomass) from physical oceanic model outputs and atmospheric reanalyses. However, those early efforts relied on one particular algorithm, leaving open the question of whether different algorithms behave differently. Here, we show that this approach can also be applied to satellite observations and can be further improved by comparing the performance of two machine learning algorithms: the SVR and a neural network with dense layers (a multi-layer perceptron, MLP). The MLP outperforms the SVR in capturing satellite Chl (correlations of 0.6 vs. 0.17 on a global scale, respectively) along with its seasonal and interannual variability, despite an underestimated amplitude. Neural networks such as the MLP thus appear to be promising tools for investigating long-term phytoplankton time-series.
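
As an illustration of the comparison described above, the following minimal sketch trains an SVR and an MLP regressor on synthetic stand-ins for the physical predictors and a proxy log-Chl target; the variable names, data, and hyperparameters are assumptions for demonstration, not the authors' pipeline.

```python
# Sketch: SVR vs. MLP regression on synthetic physical predictors.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))  # hypothetical predictors (e.g. SST, SLA, wind)
y = np.tanh(X @ rng.normal(size=6)) + 0.1 * rng.normal(size=2000)  # proxy log-Chl

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0)),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 64),
                                      max_iter=1000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    r = np.corrcoef(y_te, model.predict(X_te))[0, 1]
    print(f"{name}: correlation with held-out target = {r:.2f}")
```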

2018 ◽  
Vol 8 (8) ◽  
pp. 1280 ◽  
Author(s):  
Yong Kim ◽  
Youngdoo Son ◽  
Wonjoon Kim ◽  
Byungki Jin ◽  
Myung Yun

Sitting on a chair in an awkward posture, or sitting for a long period of time, is a risk factor for musculoskeletal disorders. A postural habit, once formed, cannot be changed easily. It is important to form a proper postural habit from childhood, because lumbar disease that develops during childhood due to improper posture is likely to recur. Thus, there is a need for a monitoring system that classifies children's sitting postures. The purpose of this paper is to develop a system for classifying children's sitting postures using machine learning algorithms. The convolutional neural network (CNN) algorithm was used in addition to the conventional algorithms: Naïve Bayes classifier (NB), decision tree (DT), neural network (NN), multinomial logistic regression (MLR), and support vector machine (SVM). To collect data for classifying sitting postures, a sensing cushion was developed by mounting a pressure sensor mat (8 × 8) inside a children's chair seat cushion. Ten children participated, and sensor data were collected while each child held a static posture for each of the five prescribed postures. The accuracy of the CNN was found to be the highest compared with those of the other algorithms. It is expected that a comprehensive posture monitoring system can be established through future research on enhancing the classification algorithm and providing an effective feedback system.
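
For a concrete picture of the CNN branch, here is a minimal sketch assuming one 8 × 8 pressure map per sample and the five posture classes from the study; the architecture and the synthetic data are illustrative, not the authors' exact network.

```python
# Sketch: small CNN classifying 8x8 pressure maps into five postures.
import numpy as np
import tensorflow as tf

num_classes = 5
X = np.random.rand(500, 8, 8, 1).astype("float32")   # fake pressure maps
y = np.random.randint(0, num_classes, size=500)      # fake posture labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same",
                           input_shape=(8, 8, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```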


2020 ◽  
Vol 12 (11) ◽  
pp. 1838 ◽  
Author(s):  
Zhao Zhang ◽  
Paulo Flores ◽  
C. Igathinathane ◽  
Dayakar L. Naik ◽  
Ravi Kiran ◽  
...  

The current mainstream approach of using manual measurements and visual inspections for crop lodging detection is inefficient, time-consuming, and subjective. An innovative method for wheat lodging detection that can overcome or alleviate these shortcomings would be welcome. This study proposed a systematic approach for wheat lodging detection in research plots (372 experimental plots), which consisted of using unmanned aerial systems (UAS) for aerial imagery acquisition, manual field evaluation, and machine learning algorithms to detect whether lodging occurred. UAS imagery was collected on three different dates (23 and 30 July 2019, and 8 August 2019) after lodging occurred. Traditional machine learning and deep learning were evaluated and compared in terms of classification accuracy and standard deviation. For traditional machine learning, five types of features (i.e., gray level co-occurrence matrix, local binary pattern, Gabor, intensity, and Hu-moment) were extracted and fed into three traditional machine learning algorithms (i.e., random forest (RF), neural network, and support vector machine) for detecting lodged plots. For the dataset from each imagery collection date, the accuracies of the three algorithms were not significantly different from each other. For each of the three algorithms, accuracy was lowest on the first date and highest on the last date. Incorporating standard deviation as a measure of performance robustness, RF was determined to be the most satisfactory. Regarding deep learning, three different convolutional neural networks (a simple convolutional neural network, VGG-16, and GoogLeNet) were tested. For each single-date dataset, GoogLeNet consistently outperformed the other two methods. Further comparisons between RF and GoogLeNet showed that the detection accuracies of the two methods were not significantly different (p > 0.05); hence, choosing either would not affect the final detection accuracy. However, since the average accuracy of GoogLeNet (93%) was higher than that of RF (91%), GoogLeNet was recommended for wheat lodging detection. This research demonstrated that UAS RGB imagery, coupled with the GoogLeNet algorithm, can be a novel, reliable, objective, simple, low-cost, and effective (accuracy > 90%) tool for wheat lodging detection.
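
A minimal sketch of the traditional branch of this pipeline is given below, extracting two of the five feature types named above (GLCM statistics and an LBP histogram, via scikit-image) and feeding them to a random forest; the plot crops and lodging labels are synthetic placeholders.

```python
# Sketch: GLCM + LBP texture features into a random forest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def texture_features(img):
    """img: 2D uint8 grayscale crop of one plot."""
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([[contrast, homogeneity], hist])

rng = np.random.default_rng(0)
plots = rng.integers(0, 256, size=(372, 64, 64), dtype=np.uint8)  # fake crops
labels = rng.integers(0, 2, size=372)                             # lodged or not

X = np.array([texture_features(p) for p in plots])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```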


2021 ◽  
Vol 30 (04) ◽  
pp. 2150020
Author(s):  
Luke Holbrook ◽  
Miltiadis Alamaniotis

With the increase in cyber-attacks on millions of Internet of Things (IoT) devices, the poor network security measures on those devices have become a main source of the problem. This article studies the effectiveness of several machine learning algorithms at detecting malware on consumer IoT devices. In particular, the Support Vector Machine (SVM), Random Forest, and Deep Neural Network (DNN) algorithms are benchmarked on a set of test data and compared as tools for safeguarding IoT deployments. Test results on a set of four IoT devices showed that all three tested algorithms detect network anomalies with high accuracy. However, the deep neural network provides the highest coefficient of determination (R²) and is hence identified as the most precise of the tested algorithms for IoT security on the datasets considered.
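
The benchmark idea can be sketched as follows, scoring the three model families by the coefficient of determination R² on held-out data; the traffic features and anomaly target here are synthetic placeholders, not the paper's device captures.

```python
# Sketch: compare SVM, random forest, and a dense network by R^2.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 10))                                   # fake traffic statistics
y = (X[:, 0] * X[:, 1] + X[:, 2]) + 0.1 * rng.normal(size=3000)   # fake anomaly score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
models = {
    "SVM": SVR(),
    "Random Forest": RandomForestRegressor(random_state=1),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                        random_state=1),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, m.predict(X_te)):.3f}")
```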


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0258788
Author(s):  
Sarra Ayouni ◽  
Fahima Hajjej ◽  
Mohamed Maddeh ◽  
Shaha Al-Otaibi

Educational research increasingly emphasizes the potential of student engagement and its impact on performance, retention, and persistence. This construct has been an important paradigm in higher education for many decades. However, evaluating and predicting a student's engagement level in an online environment remains a challenge. The purpose of this study is to propose an intelligent predictive system that predicts a student's engagement level and then provides the student with feedback to enhance motivation and dedication. Three categories of students are defined by engagement level (Not Engaged, Passively Engaged, and Actively Engaged). We applied three different machine learning algorithms, namely Decision Tree, Support Vector Machine, and Artificial Neural Network, to students' activities recorded in Learning Management System reports. The results demonstrate that machine learning algorithms can predict a student's engagement level. In addition, according to the performance metrics, the Artificial Neural Network achieves greater accuracy (85%) than the Support Vector Machine (80%) and Decision Tree (75%) classifiers. Based on these results, the intelligent predictive system sends feedback to the students and alerts the instructor once a student's engagement level decreases. The instructor can identify students' difficulties during the course and motivate them through e-mail reminders, course messages, or scheduling an online meeting.
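
A minimal sketch of this three-class setup is shown below, with an assumed feature set of LMS activity counts (logins, forum posts, submissions, minutes online) that is not taken from the study.

```python
# Sketch: three classifiers predicting a three-level engagement label.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.poisson(lam=(5, 2, 3, 40), size=(600, 4)).astype(float)  # fake LMS stats
# 0 = Not Engaged, 1 = Passively Engaged, 2 = Actively Engaged
y = np.clip(X.sum(axis=1) // 25, 0, 2).astype(int)

for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=2)),
                  ("SVM", SVC()),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=1000, random_state=2))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```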


2021 ◽  
Author(s):  
Nuno Moniz ◽  
Susana Barbosa

The Dansgaard-Oeschger (DO) events are among the most striking examples of abrupt climate change in the Earth's history, representing temperature oscillations of about 8 to 16 degrees Celsius within a few decades. DO events have been studied extensively in paleoclimatic records, particularly in ice core proxies. Examples include the Greenland NGRIP record of oxygen isotopic composition.

This work addresses the anticipation of DO events using machine learning algorithms. We consider the NGRIP time series from 20 to 60 kyr b2k with the GICC05 timescale and 20-year temporal resolution. Forecasting horizons range from 0 (nowcasting) to 400 years. We adopt three different machine learning algorithms (random forests, support vector machines, and logistic regression) on training windows of 5 kyr. We perform validation on subsequent test windows of 5 kyr, based on the timestamps of previously classified DO events in Greenland by Rasmussen et al. (2014). We perform experiments with both sliding and growing windows.

Results show that predictions on sliding windows are better overall, indicating that modelling is affected by non-stationary characteristics of the time series. The three algorithms' predictive performance is similar, with slightly better performance of random forest models for shorter forecast horizons. The models' predictive capability decreases as the forecasting horizon lengthens but remains reasonable up to 120 years. Performance degradation is mostly related to imprecision in determining the start and end times of events and to falsely identifying some periods as DO events.
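
The sliding- versus growing-window evaluation can be sketched as follows on a generic binary event series at 20-year resolution; the 250-sample windows correspond to the 5 kyr train/test setup, while the features and labels are illustrative stand-ins.

```python
# Sketch: sliding vs. growing training windows for event classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

step = 250                              # 5 kyr at 20-year resolution
rng = np.random.default_rng(3)
x = rng.normal(size=(2000, 5))          # fake proxy features per time step
y = (x[:, 0] > 1.5).astype(int)         # fake DO-event labels

for scheme in ("sliding", "growing"):
    accs = []
    for start in range(0, len(x) - 2 * step, step):
        tr0 = start if scheme == "sliding" else 0   # growing keeps all history
        X_tr, y_tr = x[tr0:start + step], y[tr0:start + step]
        X_te = x[start + step:start + 2 * step]
        y_te = y[start + step:start + 2 * step]
        clf = RandomForestClassifier(random_state=3).fit(X_tr, y_tr)
        accs.append(clf.score(X_te, y_te))
    print(f"{scheme} windows: mean accuracy = {np.mean(accs):.2f}")
```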


2020 ◽  
Author(s):  
Atika Qazi ◽  
Khulla Naseer ◽  
Javaria Qazi ◽  
Muhammad Abo

Well-timed forecasts of infectious outbreaks using time-series data can help in the proper planning of public health measures. If the forecasts are generated by machine learning algorithms, they can be used to direct resources where they are most needed. Here we present a support vector machine (SVM) model that uses epidemiological data provided by the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE), the World Health Organization (WHO), and the Centers for Disease Control and Prevention (CDC) to predict upcoming data before official declaration by the WHO. Our study, conducted on the time-series data available from 22 January to 10 March 2020, reveals that COVID-19 was spreading at an alarming rate and progressing towards a pandemic. If machine learning algorithms are used to predict the dynamics of an infectious outbreak, future strategies can help in better management. In addition, exploratory data analysis (EDA) highlights the importance of the quarantine measures taken at the onset of the epidemic by China and world leadership in containing the initial COVID-19 transmission. Nevertheless, when quarantine measures were relaxed, a sharp upsurge was seen in COVID-19 transmission. In the selected dataset (22 January to 10 March 2020), confirmed cases were the largest category at 126,344 (64%), recovered cases numbered 68,289 (34%), and the death rate was around 2%. The model presented here is flexible, can incorporate uncertainty about outbreak dynamics, and can be a significant tool for combating future outbreaks.
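
As a rough sketch of an SVM-based time-series forecast, the following fits an SVR on lagged values of a synthetic cumulative case series; the series, lag length, and hyperparameters are assumptions, not the JHU CSSE data or the authors' model.

```python
# Sketch: SVR forecasting with lagged case counts as features.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

days = np.arange(60)
cases = 100 * np.exp(0.08 * days) + np.random.default_rng(4).normal(0, 50, 60)

lags = 7                                 # predict today from the past week
X = np.array([cases[i:i + lags] for i in range(len(cases) - lags)])
y = cases[lags:]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
model.fit(X[:-7], y[:-7])                # hold out the last week
pred = model.predict(X[-7:])
print("last-week forecast vs. actual:")
for p, a in zip(pred, y[-7:]):
    print(f"  predicted {p:9.0f}   actual {a:9.0f}")
```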


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-15
Author(s):  
Zeynep Hilal Kilimci ◽  
Aykut Güven ◽  
Mitat Uysal ◽  
Selim Akyokus

Nowadays, smart devices, as a part of daily life, collect data about their users with the help of sensors placed on them. Sensor data are usually physical data, but mobile applications collect more than physical data, such as device usage habits and personal interests. Collected data are usually classified as personal, but they contain valuable information about their users when analyzed and interpreted. One of the main purposes of personal data analysis is to make predictions about users. Collected data can be divided into two major categories: physical and behavioral data. Behavioral data are also referred to as neurophysical data. Both physical and neurophysical parameters are collected in this study. Physical data contain measurements of the users such as heartbeat, sleep quality, energy, and movement/mobility parameters. Neurophysical data contain keystroke patterns such as typing speed and typing errors. Users' emotional/mood statuses are also investigated by asking daily questions: six emotion-related questions are asked each day, and depending on the answers, users' emotional states are graded. Our aim is to show that there is a connection between users' physical/neurophysical parameters and their mood/emotional conditions. To test this hypothesis, we collected physical and neurophysical parameters from 15 users for 1 year. The novelty of this work is the combined use of physical and neurophysical parameters; a further novelty is that the emotion classification task is performed with both conventional machine learning algorithms and deep learning models. For this purpose, Feedforward Neural Network (FFNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) neural networks are employed as deep learning methodologies. Multinomial Naïve Bayes (MNB), Support Vector Regression (SVR), Decision Tree (DT), Random Forest (RF), and a Decision Integration Strategy (DIS) are evaluated as conventional machine learning algorithms. To the best of our knowledge, this is the first attempt to analyze the neurophysical conditions of users by evaluating deep learning models for mood analysis and by enriching physical characteristics with neurophysical parameters. Experimental results demonstrate that the use of deep learning methodologies and the combination of both physical and neurophysical parameters enhance the classification success of the system in interpreting users' mood. A wide range of comparative and extensive experiments shows that the proposed model exhibits noteworthy results compared to state-of-the-art studies.
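
To make one of the deep models above concrete, here is a minimal LSTM sketch that reads a week of daily combined physical and keystroke features and predicts a graded mood class; the shapes and data are illustrative stand-ins.

```python
# Sketch: LSTM over daily physical + neurophysical features to a mood class.
import numpy as np
import tensorflow as tf

n_days, n_features, n_moods = 7, 6, 3
X = np.random.rand(400, n_days, n_features).astype("float32")  # fake sequences
y = np.random.randint(0, n_moods, size=400)                    # fake mood grades

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(n_days, n_features)),
    tf.keras.layers.Dense(n_moods, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
```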


Nowadays, machine learning and deep learning algorithms are considered new technologies increasingly used in the biomedical field. Machine learning is a branch of Artificial Intelligence that aims to automatically find patterns in existing data. A newer machine learning subfield, deep learning, has emerged; it deals, among other things, with object recognition in images. In this paper, our goal is to analyze DNA microarrays with these algorithms to classify two types of genes: the first class comprises cell-cycle-regulated genes and the second non-cell-cycle-regulated ones. In the current state of the art, researchers process the numerical data associated with gene evolution to achieve this classification. Here, we propose a new and different approach based on processing the microarray images themselves. To classify the images, we use three machine learning algorithms: Support Vector Machine, K-Nearest Neighbors, and Random Forest. We also use the Convolutional Neural Network and fully connected neural network algorithms. Experiments demonstrate that our approaches outperform the state of the art by a margin of 14.73 percent using machine learning algorithms and by a margin of 22.39 percent using deep learning models. Our models achieve real-time test accuracies of approximately 92.39% using the CNN and 94.73% using the machine learning algorithms.
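
The classical branch of this comparison can be sketched as follows, feeding flattened microarray spot images to the three named classifiers; the images and the two gene-class labels are synthetic placeholders.

```python
# Sketch: SVM, k-NN, and random forest on flattened microarray images.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
imgs = rng.random((300, 16, 16))        # fake microarray spot images
labels = rng.integers(0, 2, size=300)   # cell-cycle-regulated or not

X = imgs.reshape(len(imgs), -1)         # flatten each image to a vector
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=5)
for name, clf in [("SVM", SVC()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("Random Forest", RandomForestClassifier(random_state=5))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(X_te, y_te):.2f}")
```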


2021 ◽  
Vol 11 (12) ◽  
pp. 5703
Author(s):  
Yifan Si ◽  
Dawei Gong ◽  
Yang Guo ◽  
Xinhua Zhu ◽  
Qiangsheng Huang ◽  
...  

The DeepLab v3+ neural network shows excellent performance in semantic segmentation. In this paper, we propose a segmentation framework based on the DeepLab v3+ neural network and apply it to the problem of hyperspectral imagery classification (HSIC). Dimensionality reduction of the hyperspectral image is performed using principal component analysis (PCA). DeepLab v3+ is used to extract spatial features, which are then fused with spectral features. A support vector machine (SVM) classifier is used for fitting and classification. Experimental results show that the proposed framework outperforms most traditional machine learning algorithms and deep learning algorithms on hyperspectral imagery classification tasks.
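
The fusion step can be sketched as follows, concatenating PCA-reduced spectral features with placeholder spatial features ahead of an SVM; the DeepLab v3+ feature extractor itself is stubbed out here, so this shows only the shape of the pipeline.

```python
# Sketch: PCA spectral reduction + spatial-spectral fusion + SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_pixels, n_bands = 5000, 200
spectra = rng.random((n_pixels, n_bands))      # fake hyperspectral pixels
labels = rng.integers(0, 9, size=n_pixels)     # fake land-cover classes

spectral = PCA(n_components=10).fit_transform(spectra)  # spectral branch
spatial = rng.random((n_pixels, 16))   # stand-in for DeepLab v3+ features

fused = np.hstack([spectral, spatial])         # spatial-spectral fusion
clf = SVC().fit(fused[:4000], labels[:4000])
print("held-out accuracy:", clf.score(fused[4000:], labels[4000:]))
```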


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Hang Chen ◽  
Sulaiman Khan ◽  
Bo Kou ◽  
Shah Nazir ◽  
Wei Liu ◽  
...  

The emergence of Internet of Things (IoT)-enabled applications has inspired the world during the last few years, providing state-of-the-art and novel solutions for different problems. This evolving field is mainly led by wireless sensor networks, radio frequency identification, and smart mobile technologies. Among others, the IoT plays a key role in the form of smart medical devices and wearables, which can collect varied and longitudinal patient-generated health data while also offering preliminary diagnosis options. In efforts to help patients with IoT-based solutions, experts exploit the capabilities of machine learning algorithms to provide efficient solutions for hemorrhage diagnosis. To reduce death rates and support accurate treatment, this paper presents a smart IoT-based application using machine learning algorithms for human brain hemorrhage diagnosis. Based on computerized tomography scan images from an intracranial dataset, a support vector machine and a feedforward neural network were applied for classification. Overall classification accuracies of 80.67% and 86.7% were obtained for the support vector machine and the feedforward neural network, respectively. The analysis concludes that the feedforward neural network outperforms the support vector machine in classifying intracranial images. The output generated by the classification tool gives information about the type of brain hemorrhage, which ultimately helps validate the expert's diagnosis and serves as a learning tool for trainee radiologists to minimize errors in the available systems.
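
A minimal sketch of the two classifiers compared above, applied to flattened CT slices, follows; the images, hemorrhage labels, and preprocessing are synthetic assumptions rather than the study's setup.

```python
# Sketch: SVM vs. feedforward network on flattened CT slices.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
scans = rng.random((400, 32, 32))        # fake CT slices
labels = rng.integers(0, 4, size=400)    # fake hemorrhage types

X = scans.reshape(len(scans), -1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=7)
for name, clf in [("SVM", SVC()),
                  ("Feedforward NN", MLPClassifier(hidden_layer_sizes=(128,),
                                                   max_iter=500,
                                                   random_state=7))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(X_te, y_te):.2f}")
```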

