Detection of Depression Symptoms Using Chatbots Based on Machine Learning

Author(s):  
Vitor Bastos ◽  
André Felipe Monteiro

Nowadays depression is a relevant issue due to the high level of stress observed even in students and young people. Moreover, the detection of depression symptoms is a complex task, since each person has different behaviors and reactions in these scenarios. This work addresses the detection of depression symptoms using chatbots based on machine learning algorithms. The use of chatbots enables a smooth approach for shy and introspective people who do not feel comfortable talking to parents, psychologists, or medical professionals in general. To this end, a smartphone app is proposed that holds a conversation with a person and verifies whether depression symptoms are observed, using machine learning algorithms. The initial results show that the proposed model has good accuracy in simulated scenarios, where basic conversations are performed by the chatbot.
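To make the approach concrete, here is a minimal sketch of the kind of message-level classifier such a chatbot could call after each user turn; the sample phrases, labels, and model choice (TF-IDF features with logistic regression) are illustrative assumptions, since the abstract does not specify the algorithm used.

```python
# A minimal sketch (not the paper's actual model): TF-IDF features plus
# logistic regression classifying individual chat messages. The phrases
# and labels below are hypothetical training examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "I can't sleep and nothing feels worth doing",
    "I had a great day with my friends",
    "I feel empty and tired all the time",
    "Looking forward to the weekend trip",
]
labels = [1, 0, 1, 0]  # 1 = possible depression symptom, 0 = neutral

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# The chatbot would call this after each user turn.
print(model.predict(["everything feels pointless lately"]))
```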

Author(s):  
Fernando Enrique Lopez Martinez ◽  
Edward Rolando Núñez-Valdez

IoT, big data, and artificial intelligence are currently three of the most relevant and trending pieces for innovation and predictive analysis in healthcare. Many healthcare organizations are already working on developing their own home-centric data collection networks and intelligent big data analytics systems based on machine-learning principles. The benefit of using IoT, big data, and artificial intelligence for community and population health is better health outcomes. The new generation of machine-learning algorithms can use the large standardized data sets generated in healthcare to improve the effectiveness of public health interventions. Much of these data come from sensors, devices, electronic health records (EHR), data generated by public health nurses, mobile data, social media, and the internet. This chapter shows a high-level implementation of a complete IoT, big data, and machine learning solution deployed in the city of Cartagena, Colombia, for hypertensive patients, using an eHealth sensor and Amazon Web Services components.
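As an illustration of the ingestion side of such an architecture, the hedged sketch below pushes a hypothetical blood-pressure reading from a home eHealth sensor into an Amazon Kinesis stream with boto3; the stream name, region, and record layout are assumptions, as the chapter's exact AWS configuration is not given here.

```python
# Hypothetical ingestion step: push one blood-pressure reading from a home
# eHealth sensor into an Amazon Kinesis stream. Stream name, region, and
# record layout are assumptions, not the chapter's published configuration.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

reading = {
    "patient_id": "p-001",               # hypothetical identifier
    "systolic_mmHg": 152,
    "diastolic_mmHg": 96,
    "timestamp": "2020-01-15T10:30:00Z",
}

# One record per measurement; downstream consumers (analytics, ML models)
# read from the stream.
kinesis.put_record(
    StreamName="hypertension-readings",  # assumed stream name
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["patient_id"],
)
```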


2020 ◽  
Vol 190 (3) ◽  
pp. 342-351
Author(s):  
Munir S Pathan ◽  
S M Pradhan ◽  
T Palani Selvam

Abstract In the present study, machine learning (ML) methods for the identification of abnormal glow curves (GCs) of CaSO4:Dy-based thermoluminescence dosimeters in individual monitoring are presented. The classifier algorithms random forest (RF), artificial neural network (ANN), and support vector machine (SVM) are employed to identify not only abnormal glow curves but also the type of abnormality. For the first time, the simplest and most computationally efficient algorithm, based on RF, is presented for GC classification. About 4000 GCs are used for the training and validation of the ML algorithms, and the performance of all algorithms is compared using various parameters. Results show a fairly good accuracy of 99.05% for the classification of GCs by the RF algorithm, whereas 96.7% and 96.1% accuracy is achieved using ANN and SVM, respectively. The RF-based classifier is recommended for GC classification as well as for assisting fault determination in the TLD reader system.
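A minimal sketch of the reported comparison is shown below: the same labeled glow-curve features are fed to RF, ANN, and SVM classifiers and scored on a held-out split. The data here are synthetic placeholders; the study's ~4000 real glow curves, feature encoding, and hyperparameters are not reproduced.

```python
# Sketch of the reported three-way comparison on synthetic placeholders;
# the study's real inputs are ~4000 digitized glow curves with abnormality
# labels, which are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 200))   # placeholder sampled GC intensities
y = rng.integers(0, 5, size=4000)  # placeholder abnormality classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [
    ("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("ANN", MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)),
    ("SVM", SVC(kernel="rbf")),
]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```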


Diagnostics ◽  
2019 ◽  
Vol 9 (1) ◽  
pp. 29 ◽  
Author(s):  
Lea Pehrson ◽  
Michael Nielsen ◽  
Carsten Ammitzbøl Lauridsen

The aim of this study was to provide an overview of the literature available on machine learning (ML) algorithms applied to the Lung Image Database Consortium Image Collection (LIDC-IDRI) database as a tool for optimizing the detection of lung nodules in thoracic CT scans. This systematic review was compiled according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Only original research articles concerning algorithms applied to the LIDC-IDRI database were included. The initial search yielded 1972 publications after removing duplicates, and 41 of these articles were included in this study. The articles were divided into two subcategories describing their overall architecture. The majority of feature-based algorithms achieved an accuracy >90%, whereas the deep learning (DL) algorithms achieved accuracies in the range of 82.2%–97.6%. In conclusion, ML and DL algorithms are able to detect lung nodules with a high level of accuracy, sensitivity, and specificity when applied to an annotated archive of CT scans of the lung. However, there is no consensus on the method applied to determine the efficiency of ML algorithms.


2021 ◽  
Author(s):  
Ram Sunder Kalyanraman ◽  
Xiaoli Chen ◽  
Po-Yen Wu ◽  
Kevin Constable ◽  
Amit Govil ◽  
...  

Abstract Ultrasonic and sonic logs are increasingly used to evaluate the quality of cement placement in the annulus behind the pipe and its potential to perform as a barrier. Wireline logs are carried out in widely varying conditions and attempt to evaluate a variety of cement formulations in the annulus. The annulus geometry is complex due to pipe standoff and often affects the behavior (properties) of the cement. The transformation of ultrasonic data into a meaningful cement evaluation is also a complex task and requires expertise to ensure the processing is carried out as well as interpreted correctly. Cement formulations can vary from heavyweight cement to ultralight foamed cements. The ultrasonic log-based evaluation, using legacy practices, works well for cements that are well behaved and well bonded to casing. In such cases, a lightweight cement and a heavyweight cement, when bonded, can easily be discriminated from gas or liquid (mud) through simple quantitative thresholds, resulting in a Solid (S) - Liquid (L) - Gas (G) map. However, ultralight and foamed cements may overlap with mud in quantitative terms. Cements may debond from casing with a gap (either wet or dry), resulting in a very complex log response that may not be amenable to simple threshold-based discrimination of S-L-G. Cement sheath evaluation, and the inference that the cement sheath can serve as a barrier, is complex. It is therefore imperative that adequate processes mitigate errors in processing and interpretation and bring in reliability and consistency. Processing inconsistencies arise when we are unable to correctly characterize the borehole properties, either due to suboptimal measurements or to assumptions about the borehole environment. Experts can and do recognize inconsistencies in processing and can advise appropriate resolutions to ensure correct processing. The same decision-making criteria that experts follow can be implemented through autonomous workflows. The ability for software to autocorrect is not only possible but significantly improves the reliability of the product for wellsite decisions. In complex situations involving debonded and ultralight cements, we may need to approach the interpretation from a data-behavior perspective, one that can be explained by physics and modeling or through field observations by experts. This leads to a novel seven-class annulus characterization [5S-L-G], which we expect will bring improved clarity on the annulus behavior. We explain the rationale for such an approach by providing a catalog of log responses for the seven classes. In addition, we introduce the ability to carry out such analysis autonomously through machine learning. Such machine learning algorithms are best applied after ensuring the data is correctly processed. We demonstrate the capability through a few field examples. The ability to emulate an "expert" through software can lead to the ability to autonomously correct processing inconsistencies prior to an autonomous interpretation, thereby significantly enhancing the reliability and consistency of cement evaluation and ruling out issues related to subjectivity, training, and competency.
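As a rough illustration of the autonomous classification step, the sketch below maps per-depth ultrasonic log attributes to one of seven annulus classes with a random forest; the feature set, class labels, and model choice are assumptions for illustration, not the authors' actual workflow.

```python
# Illustrative seven-class annulus classifier: per-depth ultrasonic log
# attributes -> one of five solid classes plus liquid and gas. Feature
# count, class labels, and the model are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["S1", "S2", "S3", "S4", "S5", "Liquid", "Gas"]  # assumed labels

rng = np.random.default_rng(1)
X_train = rng.normal(size=(5000, 4))  # placeholder processed log attributes
y_train = rng.integers(0, len(CLASSES), size=5000)

clf = RandomForestClassifier(n_estimators=200, random_state=1)
clf.fit(X_train, y_train)

# One depth sample of processed attributes -> predicted annulus class.
sample = rng.normal(size=(1, 4))
print(CLASSES[int(clf.predict(sample)[0])])
```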


2021 ◽  
Author(s):  
Mihai Niculita

Machine learning algorithms are increasingly used in geosciences for the detection and susceptibility modeling of certain landforms or processes. The increased availability of high-resolution data and the growing number of available machine learning algorithms open up the possibility of creating datasets for training models for the automatic detection of specific landforms. In this study, we tested the use of LiDAR DEMs for creating a dataset of labeled images representing shallow single-event landslides, in order to use them for the detection of other events. The R implementation of the Keras high-level neural networks API was used to build and test the proposed approach. A 5 m LiDAR DEM was cut into 25-by-25-pixel tiles; tiles that overlapped shallow single-event landslides were labeled accordingly, while tiles that did not contain landslides were randomly selected and labeled as non-landslides. The binary classification approach was tested with 255-grey-level elevation images and 255-grey-level shading images, with the shading approach giving better results. The presented case study shows the potential of machine learning for landslide detection on high-resolution DEMs.
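A minimal Keras sketch of the tile classifier described above follows; the study used the R interface to Keras, so this Python version only mirrors the idea, and the layer sizes are assumptions.

```python
# Minimal Keras sketch of the tile classifier (the study used the R
# interface to Keras; this Python version mirrors the idea, and the layer
# sizes are assumptions). Input: 25x25 single-channel shading tiles.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(25, 25, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),  # landslide vs. non-landslide
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder tiles; the real inputs are 255-grey-level shading images
# cut from the 5 m LiDAR DEM.
tiles = np.random.rand(100, 25, 25, 1).astype("float32")
labels = np.random.randint(0, 2, size=(100, 1))
model.fit(tiles, labels, epochs=2, batch_size=16)
```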


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 205
Author(s):  
Hamoud Younes ◽  
Ali Ibrahim ◽  
Mostafa Rizk ◽  
Maurizio Valle

Approximate Computing Techniques (ACTs) are promising solutions for achieving reduced energy, time latency, and hardware size in embedded implementations of machine learning algorithms. In this paper, we present the first FPGA implementation of an approximate tensorial Support Vector Machine (SVM) classifier with algorithmic-level ACTs using High-Level Synthesis (HLS). A touch-modality classification framework was adopted to validate the effectiveness of the proposed implementation. Compared to the exact implementation presented in the state of the art, the proposed implementation achieves a reduction in power consumption of up to 49% with a speedup of 3.2×. Moreover, hardware resources are reduced by 40% while consuming 82% less energy to classify an input touch, with an accuracy loss of less than 5%.
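To illustrate what an algorithmic-level approximation can look like, the sketch below quantizes a linear SVM decision function to low-precision fixed point in software; it is an analogy for the idea, not the authors' HLS design, and all values are placeholders.

```python
# Software analogy of one algorithmic-level approximation: quantizing a
# linear SVM decision function to fixed point with few fractional bits.
# This is not the authors' HLS design; all values are placeholders.
import numpy as np

def to_fixed(x, frac_bits=6):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return np.round(np.asarray(x) * scale) / scale

rng = np.random.default_rng(2)
alphas = rng.normal(size=50)   # dual coefficients (placeholder)
sv = rng.normal(size=(50, 8))  # support vectors (placeholder)
b = 0.1
x = rng.normal(size=8)         # input touch feature vector

exact = np.sign(alphas @ (sv @ x) + b)
approx = np.sign(to_fixed(alphas) @ (to_fixed(sv) @ to_fixed(x)) + to_fixed(b))
print(exact == approx)  # the quantized decision usually matches the exact one
```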


Medical data classification is an important and complex task. Due to the nature of the data, it comes in different forms such as text, numeric values, images, and sometimes a combination of all three. The goal of this paper is to provide a high-level introduction to practical machine learning for the purposes of medical data classification. In this paper we use a hybrid AE-CNN (autoencoder-based convolutional neural network) to extract data from the medical repository and classify heterogeneous medical data. The autoencoder is used to obtain the prime features, while the CNN extracts detailed features; the combination of these two mechanisms is well suited to medical data classification. The performance of the proposed mechanism is assessed against baseline methods, and the results show that it performs well.
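A hedged Keras sketch of the hybrid AE-CNN idea follows: an autoencoder is trained by reconstruction to learn compact prime features, and a CNN head classifies from the encoder's feature maps. The input shape, layer sizes, and number of classes are assumptions, as the paper's exact architecture is not given here.

```python
# Hedged Keras sketch of the AE-CNN idea; input shape, layer sizes, and
# the number of classes are assumptions.
from tensorflow import keras

inp = keras.layers.Input(shape=(64, 64, 1))

# Encoder: compress the input into compact "prime" features.
x = keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
x = keras.layers.MaxPooling2D()(x)
encoded = keras.layers.Conv2D(8, 3, activation="relu", padding="same")(x)

# Decoder: used only to train the autoencoder by reconstruction.
d = keras.layers.Conv2DTranspose(16, 3, strides=2, activation="relu",
                                 padding="same")(encoded)
decoded = keras.layers.Conv2D(1, 3, activation="sigmoid", padding="same")(d)
autoencoder = keras.Model(inp, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# CNN head: extract detailed features from the encoding and classify.
c = keras.layers.Conv2D(32, 3, activation="relu")(encoded)
c = keras.layers.GlobalAveragePooling2D()(c)
out = keras.layers.Dense(4, activation="softmax")(c)  # 4 assumed classes
classifier = keras.Model(inp, out)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```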


Author(s):  
Aadar Pandita

Heart disease has been one of the leading causes of death for quite some time now. About 31% of all deaths in the world every year are the result of cardiovascular diseases [1]. A majority of patients remain unaware of their symptoms until quite late, while others find it difficult to minimise the effects of the risk factors that cause heart disease. Machine learning algorithms have been quite efficacious in producing results with a high level of correctness, thereby preventing the onset of heart disease in many patients and reducing its impact on those already affected. They have helped medical researchers and doctors all over the world recognise patterns in patients, resulting in early detection of heart disease.


Author(s):  
Neha Sharma ◽  
Harsh Vardhan Bhandari ◽  
Narendra Singh Yadav ◽  
Harsh Vardhan Jonathan Shroff

Nowadays it is imperative to maintain a high level of security to ensure secure communication of information between institutions and organizations. With the growing use of the internet over the years, the number of attacks over the internet has escalated. A powerful Intrusion Detection System (IDS) is required to ensure the security of a network. The aim of an IDS is to monitor the active processes in a network and to detect any deviation from the normal behavior of the system. When it comes to machine learning, optimization is the process of obtaining the maximum accuracy from a model. Optimization is vital for IDSs in order to predict a wide variety of attacks with utmost accuracy. The effectiveness of an IDS depends on its ability to correctly predict and classify any anomaly faced by a computer system. During the last two decades, KDD_CUP_99 has been the most widely used data set for evaluating the performance of such systems. In this study, we apply different machine learning techniques to this data set and see which technique yields the best results.
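As a baseline for such a comparison, the sketch below scores a few scikit-learn classifiers on the KDD Cup 99 data set, using scikit-learn's bundled 10% subset; keeping only the numeric features and collapsing labels to attack vs. normal is a simplifying assumption, not the study's final preprocessing.

```python
# Baseline recipe, not the study's final setup: score a few classifiers on
# scikit-learn's bundled 10% subset of KDD Cup 99. Dropping the three
# categorical columns and collapsing labels to attack vs. normal is a
# simplification.
import pandas as pd
from sklearn.datasets import fetch_kddcup99
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

df = fetch_kddcup99(percent10=True, as_frame=True).frame
y = (~df["labels"].astype(str).str.contains("normal")).astype(int)  # 1 = attack
X = (df.drop(columns=["labels", "protocol_type", "service", "flag"])
       .apply(pd.to_numeric))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [
    ("DecisionTree", DecisionTreeClassifier(random_state=0)),
    ("RandomForest", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("NaiveBayes", GaussianNB()),
]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```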


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Adriana Tomic ◽  
Ivan Tomic ◽  
Cornelia L. Dekker ◽  
Holden T. Maecker ◽  
Mark M. Davis

Abstract Machine learning has the potential to identify novel biological factors underlying successful antibody responses to influenza vaccines. The first attempts have revealed a high level of complexity in establishing influenza immunity, with many different cellular and molecular components involved. Of note, the previously identified correlates of protection fail to account for the majority of individual responses across different age groups and influenza seasons. Challenges remain from the small sample sizes in most studies and from often limited data sets, such as transcriptomic data. Here we report the creation of a unified database, FluPRINT, to enable large-scale studies exploring the cellular and molecular underpinnings of successful antibody responses to influenza vaccines. Over 3,000 parameters were considered, including serological responses to influenza strains, serum cytokines, cell phenotypes, and cytokine stimulations. FluPRINT facilitates the application of machine learning algorithms for data mining. The data are publicly available and represent a resource to uncover new markers and mechanisms that are important for influenza vaccine immunogenicity.
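A hypothetical sketch of the kind of data mining FluPRINT is meant to enable follows; the file name and column names are assumptions for illustration, and the actual schema should be taken from the FluPRINT documentation.

```python
# Hypothetical usage sketch: relate immune measurements in a FluPRINT
# export to the vaccine-response outcome. The file name and column names
# ("donor_id", "responder") are assumptions; see the FluPRINT docs for
# the actual schema.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("fluprint_export.csv")  # assumed export file

X = df.drop(columns=["donor_id", "responder"])  # immune features
y = df["responder"]                             # 1 = high antibody responder

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # baseline predictability
```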

