Deep learning classification and regression models for temperature values on a simulated fibre specklegram sensor

2021 ◽  
Vol 2139 (1) ◽  
pp. 012001
Author(s):  
J D Arango ◽  
V H Aristizabal ◽  
J F Carrasquilla ◽  
J A Gomez ◽  
J C Quijano ◽  
...  

Abstract Fiber optic specklegram sensors use the modal interference pattern (or specklegram) to determine the magnitude of a disturbance. The most widely used interrogation methods for these sensors have focused on point measurements of intensity or on correlations between specklegrams, with limitations in sensitivity and useful measurement range. To investigate alternative specklegram interrogation methods that improve the performance of fiber specklegram sensors, we implemented and compared two deep learning models: a classification model and a regression model. To train and test the models, we used physical-optical models and finite element method simulations to create a database of specklegram images covering the temperature range between 0 °C and 100 °C. With the prediction tests, we showed that both models can cover the entire proposed temperature range, achieving an accuracy of 99.5% for the classification model and a mean absolute error of 2.3 °C for the regression model. We believe these results show that the implemented strategies can improve the metrological capabilities of this type of sensor.
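As a hedged illustration of the two interrogation strategies the abstract compares, the sketch below scores the same temperature labels two ways: binned into discrete classes (accuracy) and predicted directly as values (mean absolute error). All data here are toy values, not the paper's specklegram database.

```python
# Toy ground-truth temperatures in the 0-100 °C range of the paper.
true_temps = [0.0, 25.0, 50.0, 75.0, 100.0]

# Classification view: bin temperatures into 1 °C classes (an assumed bin width).
def to_class(t, bin_width=1.0):
    return int(t // bin_width)

predicted_classes = [0, 25, 50, 75, 100]  # hypothetical classifier output
accuracy = sum(p == to_class(t) for p, t in zip(predicted_classes, true_temps)) / len(true_temps)

# Regression view: predict the value directly and score with mean absolute error.
predicted_temps = [1.2, 23.5, 51.0, 77.3, 98.0]  # hypothetical regressor output
mae = sum(abs(p - t) for p, t in zip(predicted_temps, true_temps)) / len(true_temps)

print(accuracy)        # 1.0 on this toy data
print(round(mae, 2))
```

The trade-off the abstract reports follows this pattern: the classifier is scored on exact class hits (99.5% accuracy), while the regressor is scored on average degree error (2.3 °C MAE).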

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7637
Author(s):  
Javier Rocher ◽  
Lorena Parra ◽  
Jose M. Jimenez ◽  
Jaime Lloret ◽  
Daniel A. Basterrechea

In irrigation ponds, an excess of nutrients can cause eutrophication, the massive growth of microscopic algae, which can cause various problems in the irrigation infrastructure and should therefore be monitored. In this paper, we present a low-cost sensor based on optical absorption for determining the concentration of algae in irrigation ponds. The sensor is composed of 5 LEDs with different wavelengths and light-dependent resistances as photoreceptors. Data were gathered for the calibration of the prototype from two turbidity sources, sediment and algae, including both pure and mixed samples. Samples were measured at different concentrations from 15 mg/L to 4000 mg/L. Multiple regression models and artificial neural networks, with a training and a validation phase, are compared as two alternative methods to classify the tested samples. Our results indicate that using multiple regression models, it is possible to estimate the concentration of algae with an average absolute error of 32.0 mg/L and an average relative error of 11.0%. On the other hand, it is possible to classify up to 100% of the samples in the validation phase with the artificial neural network. Thus, a novel prototype capable of distinguishing turbidity sources, together with two classification methodologies that can be adapted to different node features, is proposed for the operation of the developed prototype.
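A minimal sketch of the multiple-regression side of this comparison: an ordinary-least-squares fit mapping the five LED photoreceptor readings to a concentration. The readings, coefficients, and noise-free setup below are invented for illustration; they are not the paper's calibration data.

```python
import numpy as np

# Hypothetical photoreceptor readings for 40 samples across the 5 LED channels.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 5))
true_w = np.array([120.0, -30.0, 55.0, 10.0, 80.0])   # assumed channel weights
y = X @ true_w + 200.0                                 # toy algae concentration, mg/L

# Multiple linear regression: add an intercept column and solve by least squares.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = Xb @ coef
mae = np.abs(pred - y).mean()
print(round(float(mae), 6))  # ~0 on this noise-free toy data
```

On real sensor data the residual error would of course be nonzero (the paper reports 32.0 mg/L average absolute error); the noise-free toy merely shows the fitting mechanics.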


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4402
Author(s):  
Pekka Siirtola ◽  
Juha Röning

In this article, regression and classification models are compared for stress detection. Both personal and user-independent models are evaluated. The article is based on the publicly available AffectiveROAD dataset, which contains data gathered using the Empatica E4 sensor and, unlike most other stress detection datasets, contains continuous target variables. The classification model used is a Random Forest, and the regression model is a bagged-tree-based ensemble. Based on the experiments, regression models outperform classification models when classifying observations as stressed or not stressed. The best user-independent results are obtained using a combination of blood volume pulse and skin temperature features; with these, the average balanced accuracy was 74.1% with the classification model and 82.3% with the regression model. In addition, regression models can be used to estimate the level of stress. Moreover, the results based on models trained using personal data are not encouraging, showing that biosignals vary greatly not only between study subjects but also between sessions gathered from the same person. On the other hand, it is shown that with subject-wise feature selection for the user-independent model, it is possible to improve recognition models more than by using personal training data to build personal models. In fact, with subject-wise feature selection, the average detection rate can be improved by as much as 4 percentage points, and it is especially useful for reducing the variance in recognition rates between study subjects.
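The core trick the abstract describes, using a regression model for a classification task, amounts to thresholding the continuous stress estimate and scoring with balanced accuracy. The sketch below shows that mechanic on made-up scores and labels (not the AffectiveROAD data), with an assumed threshold of 0.5.

```python
# Hypothetical continuous stress estimates from a regression model, in [0, 1],
# and the corresponding binary ground truth (1 = stressed).
stress_scores = [0.1, 0.4, 0.8, 0.9, 0.3, 0.7]
true_labels   = [0,   0,   1,   1,   0,   1]

threshold = 0.5  # assumed decision threshold
pred = [int(s >= threshold) for s in stress_scores]

# Balanced accuracy: the mean of per-class recall, robust to class imbalance.
tp = sum(p == 1 and t == 1 for p, t in zip(pred, true_labels))
tn = sum(p == 0 and t == 0 for p, t in zip(pred, true_labels))
sensitivity = tp / sum(true_labels)
specificity = tn / (len(true_labels) - sum(true_labels))
balanced_accuracy = (sensitivity + specificity) / 2
print(balanced_accuracy)  # 1.0 on this toy data
```

Unlike a hard classifier, the regression model also retains the score itself as a graded stress-level estimate, which is the second advantage the abstract notes.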


2021 ◽  
Author(s):  
Gholamreza Hesamian ◽  
Mohammad Ghasem Akbari

Abstract A novel functional regression model was introduced, in which the predictor is a curve linked to a scalar fuzzy response variable. An absolute-error-based penalized method with the SCAD loss function was proposed to estimate the unknown components of the model. For this purpose, a concept of fuzzy-valued function was developed and discussed. Then, a fuzzy large number notion was proposed to estimate the fuzzy-valued function. Some common goodness-of-fit criteria were also used to examine the performance of the proposed method. The efficiency of the proposed method was then evaluated through two numerical examples, including a simulation study and an applied example in the scope of watershed management. The proposed method was also compared with several common fuzzy regression models in cases where the functional data were converted to scalar values.


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 788-803
Author(s):  
Ahmed Mahdi Abdulkadium

Robotics is mainly concerned with robot movement, and the issue addressed here is improved obstacle avoidance. The system consists of a microcontroller to process the data and ultrasonic sensors to detect obstacles in its path. Artificial intelligence is used to predict the presence of obstacles in the path. In this research, a random forest algorithm is used and improved by the RFHTMC algorithm. Deep learning here is mainly concerned with reducing the mean absolute error of forecasting. The problem with random forest is its time complexity, as it involves the formation of many classification trees. The proposed algorithm reduces the set of rules used by the classification model in order to improve time complexity. Performance analysis shows a significant improvement in results compared to other deep learning algorithms as well as random forest. Forecasting accuracy shows an 8% improvement compared to random forest, with 26% reduced operation time.


2021 ◽  
Author(s):  
Takuma Shibahara ◽  
Chisa Wada ◽  
Yasuho Yamashita ◽  
Kazuhiro Fujita ◽  
Masamichi Sato ◽  
...  

Breast cancer is the most frequently found cancer in women and the one most often subjected to genetic analysis. Nonetheless, it has been causing the largest number of women's cancer-related deaths. PAM50, the intrinsic subtype assay for breast cancer, is beneficial for diagnosis and stratified treatment but does not explain each subtype's mechanism. Nowadays, deep learning can predict the subtypes from genetic information more accurately than conventional statistical methods. However, previous studies did not directly use deep learning to examine which genes are associated with the subtypes. Ours is the first study to use a deep-learning approach to reveal the mechanisms embedded in the PAM50-classified subtypes. We developed an explainable deep learning model called a point-wise linear model, which uses a meta-learning approach to generate a custom-made logistic regression model for each sample. Logistic regression is familiar to physicians and medical informatics researchers, and we can use it to analyze which genes are important for subtype prediction. Compared with the conventional logistic regression model, the custom-made logistic regression models generated by the point-wise linear model for each subtype relied on subtype-specific genes: the overlap ratio is less than twenty percent. By analyzing the point-wise linear model's inner state, we found that it used genes relevant to cell-cycle-related pathways. The results of this study suggest the potential of our explainable deep learning model to play a vital role in cancer treatment.
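The point-wise linear idea can be sketched in a few lines: a meta-function maps each sample to its own logistic-regression weights, so the prediction stays linear (and inspectable) per sample. The meta-function and features below are toy stand-ins, not the authors' trained meta-learning network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def meta_weights(x):
    # Toy meta-learner: the weights are themselves a function of the input.
    # (The real model uses a trained neural network here.)
    return 0.5 * x + np.array([1.0, -1.0, 0.5])

x = np.array([0.2, 1.5, -0.3])   # one sample's gene-expression features (made up)
w = meta_weights(x)              # custom-made weights for this sample
p = sigmoid(w @ x)               # per-sample logistic prediction in (0, 1)

# The per-sample weights w are what one inspects to see which genes mattered
# for this particular prediction.
print(w.round(2).tolist())
```

A plain logistic regression would use one fixed `w` for every sample; generating `w` per sample is what lets the model assign different important genes to different subtypes while keeping each individual prediction interpretable.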


Energies ◽  
2020 ◽  
Vol 13 (24) ◽  
pp. 6654
Author(s):  
Stefano Villa ◽  
Claudio Sassanelli

Buildings are among the main protagonists of the world’s growing energy consumption, accounting for up to 45% of it. Extensive efforts have been directed at improving energy saving and reducing environmental impacts to address the objectives set by policymakers in recent years. Meanwhile, new approaches using Machine Learning regression models have surged in the modeling and simulation research context. This research develops and proposes an innovative data-driven black-box predictive model for dynamically estimating the interior temperature of a building. The rationale behind the approach was established in two steps. First, an investigation of the extant literature on candidate methods was conducted, narrowing the field of investigation to non-recursive multi-step approaches. Second, the results obtained on a pilot case using various Machine Learning regression models in the multi-step approach were assessed, leading to the choice of the Support Vector Regression model. The prediction mean absolute error on the pilot case is 0.1 ± 0.2 °C when the offset from the prediction instant is 15 min and grows slowly for further future instants, up to 0.3 ± 0.8 °C for a prediction horizon of 8 h. Finally, the advantages and limitations of the new data-driven multi-step approach based on the Support Vector Regression model are discussed. Relying only on data related to external weather, interior temperature, and the calendar, the proposed approach is promising in that it is applicable to any type of building without needing specific geometrical/physical characteristics as input.
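The non-recursive (direct) multi-step setup mentioned in the abstract builds one training set per horizon step, so a separate regressor predicts t+h directly from the same input window instead of feeding its own predictions back in. A minimal sketch of that dataset construction on a toy series (the window length and horizons are assumed, not the paper's):

```python
# Toy interior-temperature series; in practice this would be weather,
# temperature and calendar features sampled every 15 min.
series = list(range(20))
window, horizons = 4, [1, 2, 3]   # assumed input window and horizon steps

# Direct strategy: one (X, y) dataset per horizon step h.
datasets = {h: [] for h in horizons}
for i in range(len(series) - window - max(horizons) + 1):
    x = series[i:i + window]               # input window
    for h in horizons:
        y = series[i + window + h - 1]     # target h steps ahead
        datasets[h].append((x, y))

# A separate regressor (e.g. Support Vector Regression) is then fit on each
# datasets[h]; at prediction time every horizon is answered independently.
print(datasets[1][0])  # ([0, 1, 2, 3], 4)
print(datasets[3][0])  # ([0, 1, 2, 3], 6)
```

Because no predicted value is ever reused as an input, errors do not compound across the horizon, which is consistent with the slow error growth from 15 min to 8 h that the abstract reports.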


Author(s):  
Mohamed Abdelmoneim Elshafey ◽  
Tarek Elsaid Ghoniemy

Among cancer diseases, breast cancer is considered one of the most prevalent threats, requiring early detection for a higher recovery rate. Meanwhile, the manual evaluation of malignant tissue regions in histopathology images is a critical and challenging task. Nowadays, deep learning has become a leading technology for automatic tumor feature extraction and classification as malignant or benign. This paper presents a hybrid deep learning-based approach for reliable breast cancer detection in three consecutive stages: 1) fine-tuning the pre-trained Xception-based classification model, 2) merging the extracted features with the predictions of a two-layer stacked LSTM-based regression model, and finally, 3) applying a support vector machine, in the classification phase, to the merged features. For the three stages of the proposed approach, training and testing are performed on the BreakHis dataset with nine different augmentation techniques adopted to ensure generalization. A comprehensive performance evaluation with diverse metrics shows that employing the LSTM-based regression model improves the accuracy and precision of the fine-tuned Xception-based model by 10.65% and 11.6%, respectively. Additionally, using the support vector machine as a classifier further boosts the model by 3.43% and 5.22% on the two metrics, respectively. Experimental results demonstrate the efficiency of the proposed approach, with outstanding reliability in comparison with recent state-of-the-art approaches.


2012 ◽  
Vol 40 (2) ◽  
pp. 60-82
Author(s):  
Ken Ishihara ◽  
Takehiro Noda ◽  
Hiroyuki Sakurai

ABSTRACT In contrast to the finite element method (FEM), which is widely used in the tire industry nowadays, some alternative methods have been proposed by academic communities over the past decade or so. The meshfree method is one of those new methodologies. Originally intended to remove the burden of creating the mesh that is inherent in FEM, the meshfree method relies on the point data rather than the mesh, which makes it much easier to discretize the geometry. In addition to those modeling issues, it has been found that the meshfree method has several advantages over FEM in handling geometrical nonlinearities, continuities, and so forth. In accordance with those emerging possibilities, the authors have been conducting research on the matter. This article describes the results of the authors' preliminary research on the applicability of the meshfree method to tire analyses, which include the theoretical outline, the strategy of tire modeling, numerical results, comparisons with results of FEM, and conclusions.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3719
Author(s):  
Aoxin Ni ◽  
Arian Azarang ◽  
Nasser Kehtarnavaz

The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods involve the utilization of a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, a comparison of the deep learning methods whose codes are publicly available is conducted in this paper. The public domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results obtained show that the deep learning method PhysNet generates the best heart rate measurement outcome among these methods, with a mean absolute error value of 2.57 beats per minute and a mean square error value of 7.56 beats per minute.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mu Sook Lee ◽  
Yong Soo Kim ◽  
Minki Kim ◽  
Muhammad Usman ◽  
Shi Sub Byon ◽  
...  

Abstract We examined the feasibility of explainable computer-aided detection of cardiomegaly in routine clinical practice using segmentation-based methods. Overall, 793 retrospectively acquired posterior–anterior (PA) chest X-ray images (CXRs) of 793 patients were used to train deep learning (DL) models for lung and heart segmentation. The training dataset included PA CXRs from two public datasets and in-house PA CXRs. Two fully automated segmentation-based methods using state-of-the-art DL models for lung and heart segmentation were developed. The diagnostic performance was assessed and the reliability of the automatic cardiothoracic ratio (CTR) calculation was determined using the mean absolute error and paired t-test. The effects of thoracic pathological conditions on performance were assessed using subgroup analysis. One thousand PA CXRs of 1000 patients (480 men, 520 women; mean age 63 ± 23 years) were included. The CTR values derived from the DL models and diagnostic performance exhibited excellent agreement with reference standards for the whole test dataset. Performance of segmentation-based methods differed based on thoracic conditions. When tested using CXRs with lesions obscuring heart borders, the performance was lower than that for other thoracic pathological findings. Thus, segmentation-based methods using DL could detect cardiomegaly; however, the feasibility of computer-aided detection of cardiomegaly without human intervention was limited.
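The CTR computation that follows segmentation is a simple ratio of widths: the widest horizontal extent of the heart mask over that of the thorax. A sketch on toy binary masks standing in for the DL segmentation output (the mask shapes and the width-based definition are illustrative simplifications, not the paper's exact pipeline):

```python
import numpy as np

def max_width(mask):
    # Widest horizontal extent of a binary mask, in pixels.
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

# Toy 6x8 masks in place of the model-predicted heart and thorax segmentations.
heart = np.zeros((6, 8), dtype=bool)
heart[2:5, 3:6] = True      # heart spans columns 3-5 -> width 3
thorax = np.zeros((6, 8), dtype=bool)
thorax[1:6, 1:7] = True     # thorax spans columns 1-6 -> width 6

ctr = max_width(heart) / max_width(thorax)
print(ctr)  # 0.5; cardiomegaly is commonly flagged at CTR > 0.5
```

Because the ratio depends only on the mask boundaries, lesions that obscure the heart border corrupt the heart width directly, which matches the subgroup result the abstract reports for those CXRs.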

