Pervasive Lying Posture Tracking

Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5953 ◽  
Author(s):  
Parastoo Alinia ◽  
Ali Samadani ◽  
Mladen Milosevic ◽  
Hassan Ghasemzadeh ◽  
Saman Parvaneh

Automated lying-posture tracking is important in preventing bed-related disorders, such as pressure injuries, sleep apnea, and lower-back pain. Prior research studied in-bed lying posture tracking using sensors of different modalities (e.g., accelerometer and pressure sensors). However, there remain significant gaps in research regarding how to design efficient in-bed lying posture tracking systems. These gaps can be articulated through several research questions, as follows. First, can we design a single-sensor, pervasive, and inexpensive system that can accurately detect lying postures? Second, what computational models are most effective in the accurate detection of lying postures? Finally, what physical configuration of the sensor system is most effective for lying posture tracking? To answer these important research questions, in this article we propose a comprehensive approach for designing a sensor system that uses a single accelerometer along with machine learning algorithms for in-bed lying posture classification. We design two categories of machine learning algorithms based on deep learning and traditional classification with handcrafted features to detect lying postures. We also investigate what wearing sites are the most effective in the accurate detection of lying postures. We extensively evaluate the performance of the proposed algorithms on nine different body locations and four human lying postures using two datasets. Our results show that a system with a single accelerometer can be used with either deep learning or traditional classifiers to accurately detect lying postures. The best models in our approach achieve an F1 score that ranges from 95.2% to 97.8% with a coefficient of variation from 0.03 to 0.05. The results also identify the thighs and chest as the most salient body sites for lying posture tracking. 
Our findings in this article suggest that, because accelerometers are ubiquitous and inexpensive sensors, they can be a viable source of information for pervasive monitoring of in-bed postures.
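As a loose, stdlib-only illustration of the idea behind single-accelerometer posture tracking: under a static lying posture the sensor reads roughly 1 g along the gravity direction, so the dominant axis reveals the posture. The axis conventions, thresholds, and chest placement in this sketch are illustrative assumptions, not the authors' trained models.

```python
# Hypothetical sketch: classifying a static lying posture from one tri-axial
# accelerometer sample via the gravity direction. Axis conventions and the
# 0.7 threshold are illustrative assumptions.
import math

def classify_posture(ax, ay, az):
    """Classify a static lying posture from a chest-worn accelerometer.

    Assumes x points toward the feet, y toward the left side, and z out of
    the chest; a static sensor reads ~1 g along gravity.
    """
    # Normalize so the thresholds are independent of sensor scale.
    norm = math.sqrt(ax**2 + ay**2 + az**2)
    ax, ay, az = ax / norm, ay / norm, az / norm
    if az < -0.7:
        return "supine"   # gravity pushes into the chest-mounted sensor
    if az > 0.7:
        return "prone"
    return "left-side" if ay > 0 else "right-side"

print(classify_posture(0.05, 0.02, -0.99))  # supine
print(classify_posture(0.0, 0.98, 0.1))     # left-side
```

A deployed system would classify windows of samples (with learned models, as in the article) rather than single readings.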

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Abstract Urban area mapping is an important application of remote sensing, aiming at both the estimation of and change in land cover under urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) based remote sensing data is the strong similarity between highly vegetated urban areas and oriented urban targets on the one hand and actual vegetation on the other, which leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), have been implemented along with a deep learning model, DeepLabv3+, for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in the field of SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for implementing deep learning algorithms from scratch. In the current work, it has been shown that the pre-trained deep learning model DeepLabv3+ outperforms the machine learning algorithms on the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and overall pixel accuracy of 85.65% have been achieved with DeepLabv3+; Random Forest performs best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trail with overall accuracies of 77.01% and 76.47%, respectively. 
The highest precision of 0.9228 is recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF give comparable results with precisions of 0.8977 and 0.8958, respectively.
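The pixel accuracy and per-class precision figures reported above can be illustrated with a small, stdlib-only sketch; the tiny 3×3 label maps below are invented for illustration and are not the study's data.

```python
# Minimal sketch of the segmentation metrics reported above: overall pixel
# accuracy and per-class precision over predicted vs. reference label maps.
def pixel_accuracy(pred, ref):
    flat = list(zip(sum(pred, []), sum(ref, [])))  # flatten both maps
    return sum(p == r for p, r in flat) / len(flat)

def precision(pred, ref, cls):
    flat = list(zip(sum(pred, []), sum(ref, [])))
    tp = sum(p == cls and r == cls for p, r in flat)
    predicted = sum(p == cls for p, _ in flat)
    return tp / predicted if predicted else 0.0

ref  = [["urban", "urban", "veg"],
        ["urban", "veg",   "veg"],
        ["urban", "urban", "veg"]]
pred = [["urban", "veg",   "veg"],
        ["urban", "veg",   "veg"],
        ["urban", "urban", "veg"]]
print(pixel_accuracy(pred, ref))      # 8 of 9 pixels correct
print(precision(pred, ref, "urban"))  # all predicted urban pixels correct
```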


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, with relatively recent developments in big data and deep learning and increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose To provide a review of the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted based on these contingency tables. Results The specificity for all the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. The pooled specificity and sensitivity for the comparison between radiology professionals and deep learning algorithms were 91% and 81% for deep learning models and 85% and 73% for radiology professionals (p < 0.000), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion Radiomic information extracted through machine learning programs from images may not be discernible through visual examination, and thus may improve the prognostic and diagnostic value of data sets.
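The pooled sensitivity and specificity figures above come from aggregating per-study 2×2 contingency tables. A minimal sketch of the simplest form of such pooling (summing TP, FN, TN, FP across studies) is below; a formal meta-analysis would use a random-effects model, and the counts here are invented for illustration.

```python
# Hedged sketch: pooled sensitivity and specificity from per-study 2x2
# contingency tables via simple aggregation of counts.
tables = [  # (TP, FN, TN, FP) per study -- illustrative numbers
    (45, 5, 80, 10),
    (30, 6, 50, 12),
]
TP, FN, TN, FP = (sum(t[i] for t in tables) for i in range(4))
sensitivity = TP / (TP + FN)   # 75 / 86
specificity = TN / (TN + FP)   # 130 / 152
print(round(sensitivity, 3), round(specificity, 3))
```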


2021 ◽  
Author(s):  
Celestine Udim Monday ◽  
Toyin Olabisi Odutola

Abstract Natural gas production and transportation are at risk of gas hydrate plugging, especially in offshore environments where temperature is low and pressure is high. These plugs can eventually block the pipeline, increase back pressure, stop production, and ultimately rupture gas pipelines. This study seeks to develop machine learning models, using data from kinetic-inhibitor experiments, to predict gas hydrate formation and pressure changes within the natural gas flow line. Green hydrate inhibitors A, B, and C were obtained as plant extracts and applied in low dosages (0.01 wt.% to 0.1 wt.%) on a 12-meter skid-mounted closed hydrate flow loop. From the data generated, the optimal dosages of inhibitors A, B, and C were observed to be 0.02 wt.%, 0.06 wt.%, and 0.1 wt.%, respectively. The data associated with these optimal dosages were fed to a set of supervised machine learning algorithms (extreme gradient boosting, gradient boosting regressor, and linear regressor) and a deep learning algorithm (artificial neural network). The outputs of the supervised learning algorithms and the deep learning algorithm were compared in terms of their accuracy in predicting hydrate formation and the pressure within the natural gas flow line. All models had accuracies greater than 90%. These results show that applying machine learning to flow assurance problems is viable: analyzing data and generating reports in this way can improve the accuracy and speed of the on-site decision-making process.
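As a toy illustration of the simplest model in the comparison above, a linear regressor can be fitted by ordinary least squares to flow-line pressure readings; the time/pressure values below are made up, and real work would use scikit-learn or XGBoost on the loop's sensor data.

```python
# Illustrative sketch: ordinary-least-squares fit of pressure vs. time.
# The data points are invented, not the study's flow-loop measurements.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

time_min = [0, 10, 20, 30, 40]
pressure_psi = [150, 148, 146, 144, 142]  # steady pressure drop (illustrative)
m, b = fit_line(time_min, pressure_psi)
print(m, b)  # slope ~ -0.2 psi/min, intercept ~ 150 psi
```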


2021 ◽  
Author(s):  
Lukman Ismael ◽  
Pejman Rasti ◽  
Florian Bernard ◽  
Philippe Menei ◽  
Aram Ter Minassian ◽  
...  

BACKGROUND Functional MRI (fMRI) is an essential tool for the presurgical planning of brain tumor removal, allowing the identification of functional brain networks in order to preserve the patient’s neurological functions. One fMRI technique used to identify functional brain networks is resting-state fMRI (rsfMRI). However, this technique is not routinely used because an expert reviewer is needed to identify each functional network manually. OBJECTIVE We aimed to automate the detection of brain functional networks in rsfMRI data using deep learning and machine learning algorithms. METHODS We used the rsfMRI data of 82 healthy patients to compare the diagnostic performance of our proposed end-to-end deep learning model against the reference functional networks identified manually by 2 expert reviewers. RESULTS Experimental results show a best performance of 86% correct recognition rate obtained by the proposed deep learning architecture, demonstrating its superiority over the other machine learning algorithms equally tested for this classification task. CONCLUSIONS The proposed end-to-end deep learning model was the best-performing machine learning algorithm. Using this model to automate functional network detection may broaden the use of rsfMRI, allowing the presurgical identification of these networks and thus helping to preserve the patient’s neurological status. CLINICALTRIAL Comité de protection des personnes Ouest II, decision reference CPP 2012-25


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Muhammad Waqar ◽  
Hassan Dawood ◽  
Hussain Dawood ◽  
Nadeem Majeed ◽  
Ameen Banjar ◽  
...  

Cardiac disease treatment often involves the acquisition and analysis of vast quantities of digital cardiac data, which can be put to various beneficial uses. Utilizing these data becomes even more important for critical conditions such as a heart attack, where the patient’s life is often at stake. Machine learning and deep learning are two prominent techniques for turning raw data into useful information. Some of the biggest problems arising from these techniques are massive resource utilization, extensive data preprocessing, the need for feature engineering, and ensuring reliable classification results. The proposed research work presents a cost-effective solution for predicting heart attacks with high accuracy and reliability. It uses a UCI dataset to predict heart attacks via various machine learning algorithms without any feature engineering. Moreover, the given dataset has an unequal distribution of positive and negative classes, which can reduce performance. The proposed work uses the synthetic minority oversampling technique (SMOTE) to handle this imbalance. The proposed system discards the need for feature engineering in classifying the given dataset, leading to an efficient solution, as feature engineering often proves to be a costly process. The results show that, among all machine learning algorithms, a properly tuned SMOTE-based artificial neural network outperformed all other models and many existing systems. The high reliability of the proposed system ensures that it can be effectively used for heart attack prediction.
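The core SMOTE idea referenced above is to synthesize new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. A stdlib-only sketch of that interpolation step is below; real work would use imbalanced-learn's `SMOTE`, and the toy 2-D points are invented.

```python
# Illustrative, stdlib-only sketch of the SMOTE interpolation step:
# each synthetic sample lies on the segment between a minority point
# and one of its k nearest minority neighbours.
import random

def smote_like(minority, n_new, k=2, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest minority neighbours of the base point (excluding itself)
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random position along the segment base -> nb
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
new_points = smote_like(minority, n_new=4)
print(len(new_points))  # 4 synthetic minority samples
```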


Author(s):  
Robert Ancuceanu ◽  
Marilena Viorica Hovanet ◽  
Adriana Iuliana Anghel ◽  
Florentina Furtunescu ◽  
Monica Neagu ◽  
...  

Drug-induced liver injury (DILI) remains one of the challenges in the safety profile of both authorized drugs and candidate drugs, and predicting hepatotoxicity from the chemical structure of a substance remains a challenge worth pursuing, being also coherent with the current tendency to replace non-clinical tests with in vitro or in silico alternatives. In 2016, a group of researchers from the FDA published an improved annotated list of drugs with respect to their DILI risk, constituting “the largest reference drug list ranked by the risk for developing drug-induced liver injury in humans”, DILIrank. This paper is one of the few attempting to predict liver toxicity using the DILIrank dataset. Molecular descriptors were computed with the Dragon 7.0 software, and a variety of feature selection and machine learning algorithms were implemented in the R computing environment. Nested (double) cross-validation was used to externally validate the selected models. A total of 78 models with reasonable performance were selected and stacked through several approaches, including the building of multiple meta-models. The performance of the stacked models was slightly superior to that of other published models. The models were applied in a virtual screening exercise on over 100,000 compounds from the ZINC database, and about 20% of them were predicted to be non-hepatotoxic.
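Nested (double) cross-validation, used above for external validation, keeps model selection strictly inside each outer training fold so the outer test fold gives an unbiased estimate. A stdlib-only sketch of the fold structure (fold counts are illustrative; the study used R):

```python
# Hedged sketch of nested (double) cross-validation: the outer loop holds
# out a test fold for evaluation, while the inner loop, run only on the
# remaining data, would perform model/hyperparameter selection.
def kfold(indices, k):
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        yield train, test

data = list(range(12))  # stand-ins for sample indices
for outer_train, outer_test in kfold(data, 3):
    # inner CV sees only the outer training data: select models here
    for inner_train, inner_val in kfold(outer_train, 2):
        assert not set(inner_val) & set(outer_test)  # no leakage
```

The inner loop picks the model; only the winner is scored on `outer_test`, which it has never seen.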


2021 ◽  
Vol 11 (4) ◽  
pp. 251-264
Author(s):  
Radhika Bhagwat ◽  
Yogesh Dandawate

Plant diseases cause major yield and economic losses. To detect plant disease at early stages, selecting appropriate techniques is imperative, as the choice affects the cost, diagnosis time, and accuracy. This research gives a comprehensive review of various plant disease detection methods based on the images used and the processing algorithms applied. It systematically analyzes various traditional machine learning and deep learning algorithms used for processing visible and spectral-range images, and comparatively evaluates the work done in the literature in terms of datasets used, image processing techniques employed, models utilized, and efficiency achieved. The study discusses the benefits and restrictions of each method, along with the challenges to be addressed for rapid and accurate plant disease detection. Results show that for plant disease detection, deep learning outperforms traditional machine learning algorithms, while visible-range images are more widely used than spectral images.


Author(s):  
Soundariya R.S. ◽  
Tharsanee R.M. ◽  
Vishnupriya B ◽  
Ashwathi R ◽  
...  

Coronavirus disease (COVID-19) has spread rapidly worldwide since April 2020, leading to massive loss of life across various countries. In accordance with WHO advice, diagnosis is presently implemented by reverse transcription polymerase chain reaction (RT-PCR) testing, which takes four to eight hours to process test samples and an additional 48 hours to categorize the samples as positive or negative. Laboratory tests are clearly time consuming, so a speedy and prompt diagnosis of the disease is urgently needed. This can be attained through several artificial intelligence methodologies for early diagnosis and tracing of coronavirus. These methodologies fall into three categories: (i) predicting the pandemic spread using mathematical models; (ii) empirical analysis using machine learning models to forecast the global corona transition by considering susceptible, infected, and recovered rates; and (iii) utilizing deep learning architectures for corona diagnosis using X-ray and CT scan images as input data. When X-ray and CT scan images are taken into account, supplementary data such as medical signs, patient history, and laboratory test results can also be considered when training the learning model, to improve testing efficacy. The proposed investigation thus summarizes the several mathematical models, machine learning algorithms, and deep learning frameworks that can be executed on these datasets to forecast the traces of COVID-19 and detect the risk factors of coronavirus.
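The mathematical models for pandemic spread mentioned above are typically of the susceptible-infected-recovered (SIR) family. A minimal Euler-step sketch is below; the rates beta and gamma are illustrative, not fitted to COVID-19 data.

```python
# Minimal sketch of an SIR compartmental model integrated with an Euler
# step. beta (transmission) and gamma (recovery) are illustrative values.
def sir_step(s, i, r, beta, gamma, dt=1.0):
    new_inf = beta * s * i   # fraction newly infected this step
    new_rec = gamma * i      # fraction newly recovered this step
    return s - dt * new_inf, i + dt * (new_inf - new_rec), r + dt * new_rec

s, i, r = 0.99, 0.01, 0.0    # fractions of the population
for _ in range(100):
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1)
print(round(s + i + r, 6))   # population is conserved: 1.0
```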


2021 ◽  
pp. 477-485
Author(s):  
Vu Thanh Nguyen ◽  
Mai Viet Tiep ◽  
Phu Phuoc Huy ◽  
Nguyen Thai Nho ◽  
Luong The Dung ◽  
...  
