Real-Time Exercise Feedback through a Convolutional Neural Network: A Machine Learning-Based Motion-Detecting Mobile Exercise Coaching Application

2022 ◽  
Vol 63 (Suppl) ◽  
pp. S34
Author(s):  
Jinyoung Park ◽  
Seok Young Chung ◽  
Jung Hyun Park
Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 170
Author(s):  
Muhammad Wasimuddin ◽  
Khaled Elleithy ◽  
Abdelshakour Abuzneid ◽  
Miad Faezipour ◽  
Omar Abuzaghleh

Cardiovascular diseases have been reported to be the leading cause of mortality across the globe. Among such diseases, Myocardial Infarction (MI), also known as “heart attack”, is of main interest among researchers, as its early diagnosis can prevent life-threatening cardiac conditions and potentially save human lives. Analyzing the Electrocardiogram (ECG) can provide valuable diagnostic information for detecting different types of cardiac arrhythmia. Real-time ECG monitoring systems with advanced machine learning methods provide information about health status in real time and have improved the user experience. However, advanced machine learning methods place a burden on portable and wearable devices due to their high computing requirements. We present an improved, less complex Convolutional Neural Network (CNN)-based classifier model that identifies multiple arrhythmia types from two-dimensional images of the ECG wave in real time. The proposed model is presented as a three-layer ECG signal analysis model that can potentially be adopted in real-time portable and wearable monitoring devices. We designed, implemented, and simulated the proposed CNN network using MATLAB. We also present a hardware implementation of the proposed method to validate its adaptability to real-time wearable systems. The European ST-T database, recorded with single lead L3, is used to validate the CNN classifier, which achieved an accuracy of 99.23%, outperforming most existing solutions.
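The core idea above — classifying a rasterized ECG beat with a small convolutional network — can be sketched in miniature. The following is an illustrative NumPy toy, not the authors' MATLAB model: the image size, kernel count, and five-class output are hypothetical stand-ins, and the weights are random rather than trained.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def tiny_cnn_logits(img, kernels, weights):
    """One conv layer -> ReLU -> global max pool -> linear classifier head."""
    feats = np.array([np.maximum(conv2d_valid(img, k), 0).max() for k in kernels])
    return weights @ feats  # one logit per arrhythmia class

rng = np.random.default_rng(0)
ecg_img = rng.random((32, 32))           # stand-in for a rasterized ECG beat image
kernels = rng.standard_normal((4, 3, 3)) # 4 untrained 3x3 filters
weights = rng.standard_normal((5, 4))    # 5 hypothetical arrhythmia classes
logits = tiny_cnn_logits(ecg_img, kernels, weights)
pred = int(np.argmax(logits))            # index of the predicted class
```

A real deployment on a wearable would replace the random parameters with trained ones and use optimized convolution kernels, but the data flow (image → convolution → nonlinearity → pooling → class scores) is the same.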


2020 ◽  
Vol 16 (4) ◽  
Author(s):  
Rohit Verma ◽  
Saumil Maheshwari ◽  
Anupam Shukla

Abstract
Objectives: Appropriate care for patients admitted to intensive care units (ICUs) is an increasingly prominent concern, and machine learning models are gaining recognition as a means of addressing it. Real-time prediction of mortality for ICU patients has the potential to provide physicians with interpretable results. A growing crisis in healthcare, including soaring costs, unsafe, misdirected, and fragmented care, chronic diseases, and evolving epidemic diseases, demands automated, real-time data processing to assure an improved quality of life. ICUs generate a wealth of useful data in the form of Electronic Health Records (EHRs), which allow the development of well-grounded prediction tools.
Method: We aimed to build a mortality prediction model on the 2012 PhysioNet Challenge mortality prediction database of 4,000 patients admitted to the ICU. Challenges in the dataset, such as high dimensionality, imbalanced class distribution, and missing values, were tackled with analytical methods and tools via feature engineering and new variable construction. The objective of the research was to exploit relations among the clinical variables to construct new variables, and thereby establish the effectiveness of a 1-Dimensional Convolutional Neural Network (1-D CNN) with the constructed features.
Results: Its performance was compared on Area Under the Curve (AUC) against traditional machine learning algorithms, namely the XGBoost classifier, Light Gradient Boosting Machine (LGBM) classifier, Support Vector Machine (SVM), Decision Tree (DT), K-Neighbours Classifier (K-NN), and Random Forest Classifier (RF), as well as recurrent models, namely Long Short-Term Memory (LSTM) and LSTM with attention. The investigation revealed the best AUC of 0.848 for the 1-D CNN model.
Conclusion: The relationships between the various clinical features were identified, and new features were constructed from existing ones. Multiple models were tested and compared on different metrics.
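Since AUC is the comparison metric across all the models above, it is worth recalling how it is computed. A minimal sketch using the rank-sum (Mann-Whitney U) formulation — the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case; the labels and scores below are toy values, not data from the study:

```python
def auc_score(labels, scores):
    """AUC via the rank-sum formulation: P(score_pos > score_neg), ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = died, 0 = survived; scores are predicted mortality risks.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
auc = auc_score(labels, scores)  # → 0.75
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, so the reported 0.848 means the 1-D CNN ranks a deceased patient above a survivor about 85% of the time.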


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4916
Author(s):  
Ali Usman Gondal ◽  
Muhammad Imran Sadiq ◽  
Tariq Ali ◽  
Muhammad Irfan ◽  
Ahmad Shaf ◽  
...  

Urbanization has been a major concern for both developed and developing countries in recent years, as people move with their families to urban areas for the sake of better education and a modern lifestyle. Due to rapid urbanization, cities face huge challenges, one of which is waste management, since the volume of waste is directly proportional to the number of people living in a city. Municipalities and city administrations use traditional manual waste classification techniques, which are slow, inefficient, and costly. Automatic waste classification and management is therefore essential for urbanizing cities to recycle waste better; better recycling reduces the amount of waste sent to landfills by reducing the need to collect new raw material. In this paper, the idea of a real-time smart waste classification model is presented that uses a hybrid approach to classify waste into various classes. Two machine learning models, a multilayer perceptron and a multilayer convolutional neural network (ML-CNN), are implemented. The multilayer perceptron provides binary classification, i.e., metal or non-metal waste, and the CNN identifies the class of non-metal waste. A camera placed in front of the waste conveyor belt takes a picture of each item of waste and classifies it; upon successful classification, an automatic hand hammer pushes the waste into the assigned labeled bucket. Experiments were carried out in a real-time environment with image segmentation. The training, testing, and validation accuracy of the proposed model was 0.99 under different training batches with different input features.
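The two-stage hybrid dispatch described above — a binary metal/non-metal decision followed by CNN subclassing of non-metal items — reduces to a simple control flow. A hedged sketch with stub models standing in for the trained MLP and ML-CNN; the feature names and class labels are invented for illustration and do not come from the paper:

```python
def classify_waste(item, binary_model, cnn_model):
    """Stage 1: binary metal check (MLP). Stage 2: CNN subclassing of non-metal waste."""
    if binary_model(item) == "metal":
        return "metal"            # metal goes straight to its bucket
    return cnn_model(item)        # e.g. plastic, paper, glass, organic, ...

# Hypothetical stub models; a real system would call the trained networks here.
fake_mlp = lambda item: "metal" if item["reflectance"] > 0.8 else "non-metal"
fake_cnn = lambda item: "plastic" if item["translucent"] else "paper"

shiny_can   = classify_waste({"reflectance": 0.9, "translucent": False}, fake_mlp, fake_cnn)
clear_bottle = classify_waste({"reflectance": 0.2, "translucent": True}, fake_mlp, fake_cnn)
```

Splitting the problem this way lets a cheap binary model filter out metal early, so the heavier CNN only runs on the items that actually need fine-grained classification.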


Author(s):  
Satoru Tsuiki ◽  
Takuya Nagaoka ◽  
Tatsuya Fukuda ◽  
Yuki Sakamoto ◽  
Fernanda R. Almeida ◽  
...  

Abstract
Purpose: In 2-dimensional lateral cephalometric radiographs, patients with severe obstructive sleep apnea (OSA) exhibit a more crowded oropharynx than non-OSA subjects. We tested the hypothesis that machine learning, an application of artificial intelligence (AI), could be used to detect patients with severe OSA based on 2-dimensional images.
Methods: A deep convolutional neural network was developed (n = 1258; 90%) and tested (n = 131; 10%) using data from 1389 (100%) lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n = 867; apnea hypopnea index > 30 events/h sleep) or non-OSA (n = 522; apnea hypopnea index < 5 events/h sleep) at a single center for sleep disorders. Three kinds of data sets were prepared by changing the area of interest within a single image: the original image without any modification (full image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (main region), and an image containing part of the occipital region (head only). A radiologist also performed a conventional manual cephalometric analysis of the full image for comparison.
Results: The sensitivity/specificity was 0.87/0.82 for the full image, 0.88/0.75 for the main region, 0.71/0.63 for head only, and 0.54/0.80 for the manual analysis. The area under the receiver-operating characteristic curve was highest for the main region (0.92), followed by the full image (0.89), head only (0.70), and manual cephalometric analysis (0.75).
Conclusions: A deep convolutional neural network identified individuals with severe OSA with high accuracy. Future research on this concept using AI and images can be further encouraged when discussing triage of OSA.
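The sensitivity/specificity pairs reported above come straight from a confusion matrix. A minimal sketch of the computation; the counts below are hypothetical, chosen only so that the example reproduces the full-image rates of 0.87/0.82, and are not the study's actual test-set tallies:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = recall on positives (severe OSA); specificity = recall on negatives."""
    sensitivity = tp / (tp + fn)   # true positives among all actual positives
    specificity = tn / (tn + fp)   # true negatives among all actual negatives
    return sensitivity, specificity

# Hypothetical counts: 100 OSA-positive and 100 OSA-negative test cases.
se, sp = sens_spec(tp=87, fn=13, tn=82, fp=18)  # → (0.87, 0.82)
```

Reporting both numbers matters here because the classes are imbalanced (867 vs 522): a model could achieve high raw accuracy by favoring the majority class while missing many non-OSA subjects, which specificity would expose.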


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter M. Maloca ◽  
Philipp L. Müller ◽  
Aaron Y. Lee ◽  
Adnan Tufail ◽  
Konstantinos Balaskas ◽  
...  

Abstract
Machine learning has greatly facilitated the analysis of medical data, but its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among the graders and the machine learning algorithm, and a smart data visualization (‘neural recording’). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% among the human graders alone. The ambiguity in the ground truth had a noteworthy, visualizable impact on the machine learning results. The convolutional neural network balanced between graders and allowed for modifiable predictions dependent on the compartment. Using the proposed T-REX setup, machine learning processes can be rendered more transparent and understandable, possibly leading to optimized applications.
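The Hamming-distance component above amounts to counting the fraction of pixels on which two segmentations disagree. A minimal sketch on flattened label masks; the toy per-pixel compartment labels below are invented for illustration, not OCT data:

```python
def hamming_fraction(mask_a, mask_b):
    """Fraction of positions where two flattened segmentation masks disagree."""
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must be the same size")
    return sum(a != b for a, b in zip(mask_a, mask_b)) / len(mask_a)

# Toy 8-pixel masks; labels 0/1/2 stand for hypothetical retinal compartments.
grader_1 = [0, 1, 1, 0, 2, 2, 1, 0]
grader_2 = [0, 1, 2, 0, 2, 2, 1, 0]
model    = [0, 1, 1, 0, 2, 1, 1, 0]

d_graders = hamming_fraction(grader_1, grader_2)  # inter-grader disagreement
d_model   = hamming_fraction(grader_1, model)     # grader-vs-algorithm disagreement
```

Averaging such pairwise fractions over all grader pairs and all grader-algorithm pairs yields variability percentages of the kind reported above (2.02% among graders, 1.75% grader-vs-algorithm).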

