Material Identification Using a Microwave Sensor Array and Machine Learning

Electronics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 288 ◽  
Author(s):  
Luke Harrison ◽  
Maryam Ravan ◽  
Dhara Tandel ◽  
Kunyi Zhang ◽  
Tanvi Patel ◽  
...  

In this paper, a novel methodology is proposed for material identification. It is based on the use of a microwave sensor array whose elements resonate at various frequencies within a wide range, with machine learning algorithms applied to the collected data. Unlike previous microwave sensing systems, which are mainly based on a single resonating sensor, the proposed methodology allows for material characterization over a wide frequency range, which in turn improves the accuracy of the material identification procedure. The performance of the proposed methodology is tested using readily available materials such as wood, cardboard, and plastic. However, the methodology can be extended to other applications such as industrial liquid identification and composite material identification, among others.
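The following minimal sketch illustrates the general idea of the classification stage: each sample is represented by one reading per array element (e.g., a resonance-frequency shift or dip depth), and a standard classifier is trained on these multi-frequency feature vectors. The synthetic data, feature definition, and classifier choice are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: classifying materials from a microwave sensor array.
# Feature vector = one reading per array element; data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_elements = 8                      # sensors resonating at different frequencies
materials = ["wood", "cardboard", "plastic"]

# Synthetic stand-in for measured responses: each material perturbs the
# array elements differently, plus measurement noise.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(60, n_elements))
               for i, _ in enumerate(materials)])
y = np.repeat(materials, 60)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```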

The field of biosciences has advanced considerably and has generated enormous amounts of information from Electronic Health Records, giving rise to an acute need for knowledge generation from this data. Data mining methods and machine learning play a major role in this aspect of biosciences. Chronic Kidney Disease (CKD) is a condition in which the kidneys are damaged and cannot filter blood as they should. A family history of kidney disease or failure, high blood pressure, and type 2 diabetes may lead to CKD. The damage to the kidney is lasting, and the chances of it worsening over time are high. The most common complications resulting from kidney failure are heart disease, anemia, bone disease, and high potassium and calcium levels. In the worst case, complete kidney failure occurs and a kidney transplant is needed for survival. Early detection of CKD can greatly improve quality of life, which calls for good prediction algorithms that identify CKD at an early stage. The literature shows a wide range of machine learning algorithms employed for the prediction of CKD. This paper uses data preprocessing, data transformation, and various classifiers to predict CKD and also proposes a best prediction framework for CKD. The results of the framework show promising, more accurate prediction at an early stage of CKD.
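As a hedged illustration of such a framework, the sketch below chains preprocessing (imputation), data transformation (scaling), and several candidate classifiers, comparing them by cross-validation; the file name, column names, and classifier set are assumptions rather than the paper's exact configuration.

```python
# Illustrative CKD prediction pipeline: preprocessing + transformation +
# several classifiers compared by 10-fold cross-validation.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

df = pd.read_csv("ckd.csv")                      # hypothetical dataset
X = df.drop(columns=["class"]).select_dtypes("number")
y = df["class"]                                  # e.g., "ckd" vs. "notckd"

classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=300, random_state=0),
    "svm": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    pipe = Pipeline([
        ("impute", SimpleImputer(strategy="median")),   # handle missing values
        ("scale", StandardScaler()),                    # data transformation
        ("model", clf),
    ])
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```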


2021 ◽  
Author(s):  
Zhu Shen ◽  
Wenfei Du ◽  
Cecelia Perkins ◽  
Lenn Fechter ◽  
Vanita Natu ◽  
...  

Predicting disease natural history remains a particularly challenging endeavor in chronic degenerative disorders and cancer, thus limiting early detection, risk stratification, and preventive interventions. Here, profiling the spectrum of chronic myeloproliferative neoplasms (MPNs) as a model, we identify the blood platelet transcriptome as a generalizable strategy for highly sensitive progression biomarkers that also enable prediction via machine learning algorithms. Using RNA sequencing (RNA-seq), we derive disease-relevant gene expression and alternative splicing in purified platelets from 120 peripheral blood samples constituting two independently collected and mutually validating patient cohorts of the three MPN subtypes: essential thrombocythemia, ET (n=24); polycythemia vera, PV (n=33); and primary or post-ET/PV secondary myelofibrosis, MF (n=42), as well as healthy donors (n=21). The MPN platelet transcriptome discriminates each clinical phenotype and reveals an incremental molecular reprogramming that is independent of patient driver mutation status or therapy. Leveraging this dataset, in particular the progressive expression gradient noted across MPN subtypes, we develop a machine learning model (Lasso-penalized regression) predictive of the advanced subtype MF at high accuracy (AUC-ROC of 0.95-0.96), with validation under two conditions: i) temporal, with training on the first cohort (n=71) and independent testing on the second (n=49), and ii) 10-fold cross-validation on the entire dataset. Lasso-derived signatures offer a robust core set of fewer than 10 MPN progression markers. Mechanistic insights from our data highlight impaired protein homeostasis as a prominent driver of MPN evolution, with a persistent integrated stress response. We also identify JAK inhibitor-specific signatures and other interferon-, proliferation-, and proteostasis-associated markers as putative targets for MPN-directed therapy. Our platelet transcriptome snapshot of chronic MPNs establishes a methodological foundation for deciphering disease risk stratification and progression beyond genetic data alone, thus presenting a promising avenue toward potential utility in a wide range of age-related disorders.
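A minimal sketch of the kind of Lasso-penalized classifier described above: L1-regularized logistic regression separating MF from non-MF samples using platelet gene-expression features, evaluated with 10-fold cross-validation. File names, feature layout, and the regularization strength are assumptions, not the authors' code.

```python
# L1-penalized (Lasso-style) logistic regression: MF vs. non-MF from
# platelet gene expression; non-zero coefficients give a compact marker set.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

expr = pd.read_csv("platelet_expression.csv", index_col=0)   # samples x genes (assumed)
labels = pd.read_csv("labels.csv", index_col=0)["is_MF"]     # 1 = MF, 0 = ET/PV/control

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=5000),
)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(model, expr.values, labels.values, cv=cv, scoring="roc_auc")
print(f"10-fold AUC-ROC: {auc.mean():.2f} +/- {auc.std():.2f}")

# After a final fit, the non-zero coefficients identify the selected markers.
model.fit(expr.values, labels.values)
coef = model.named_steps["logisticregression"].coef_.ravel()
print("selected markers:", list(expr.columns[coef != 0]))
```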


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Thomas Kurmann ◽  
Siqing Yu ◽  
Pablo Márquez-Neila ◽  
Andreas Ebneter ◽  
Martin Zinkernagel ◽  
...  

Abstract In ophthalmology, retinal biological markers, or biomarkers, play a critical role in the management of chronic eye conditions and in the development of new therapeutics. While many imaging technologies used today can visualize these markers, Optical Coherence Tomography (OCT) is often the tool of choice due to its ability to image retinal structures in three dimensions at micrometer resolution. But with widespread use in clinical routine and the growing prevalence of chronic retinal conditions, the quantity of scans acquired worldwide is surpassing the capacity of retinal specialists to inspect them in meaningful ways. Instead, automated analysis of scans using machine learning algorithms provides a cost-effective and reliable alternative to assist ophthalmologists in clinical routine and research. We present a machine learning method capable of consistently identifying a wide range of common retinal biomarkers from OCT scans. Our approach avoids the need for costly segmentation annotations and allows scans to be characterized by biomarker distributions. These can then be used to classify scans based on their underlying pathology in a device-independent way.
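As a hedged sketch of the general approach (multi-label biomarker detection from OCT slices, not the authors' specific network), the small convolutional model below outputs one logit per biomarker and is trained with a binary cross-entropy loss; the number of biomarkers and the input shapes are assumptions.

```python
# Multi-label biomarker detection sketch: a tiny CNN with one output per
# biomarker, trained with BCEWithLogitsLoss (multi-label, not multi-class).
import torch
import torch.nn as nn

N_BIOMARKERS = 11          # assumed number of biomarker classes

class OCTBiomarkerNet(nn.Module):
    def __init__(self, n_out=N_BIOMARKERS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_out)

    def forward(self, x):                      # x: (batch, 1, H, W) B-scan
        h = self.features(x).flatten(1)
        return self.head(h)                    # one logit per biomarker

model = OCTBiomarkerNet()
criterion = nn.BCEWithLogitsLoss()
x = torch.randn(4, 1, 224, 224)                # dummy batch of B-scans
y = torch.randint(0, 2, (4, N_BIOMARKERS)).float()
loss = criterion(model(x), y)
loss.backward()
print("loss:", float(loss))
```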


2020 ◽  
Vol 12 (24) ◽  
pp. 4070
Author(s):  
Florian Ellsäßer ◽  
Alexander Röll ◽  
Joyson Ahongshangbam ◽  
Pierre-André Waite ◽  
Hendrayanto ◽  
...  

Plant transpiration is a key element in the hydrological cycle. Widely used methods for its assessment comprise sap flux techniques for whole-plant transpiration and porometry for leaf stomatal conductance. Recently emerging approaches based on surface temperatures and a wide range of machine learning techniques offer new possibilities to quantify transpiration. The focus of this study was to predict sap flux and leaf stomatal conductance based on drone-recorded and meteorological data and compare these predictions with in-situ measured transpiration. To build the prediction models, we applied classical statistical approaches and machine learning algorithms. The field work was conducted in an oil palm agroforest in lowland Sumatra. Random forest predictions yielded the highest congruence with measured sap flux (r2 = 0.87 for trees and r2 = 0.58 for palms) and confidence intervals for intercept and slope of a Passing-Bablok regression suggest interchangeability of the methods. Differences in model performance are indicated when predicting different tree species. Predictions for stomatal conductance were less congruent for all prediction methods, likely due to spatial and temporal offsets of the measurements. Overall, the applied drone and modelling scheme predicts whole-plant transpiration with high accuracy. We conclude that there is large potential in machine learning approaches for ecological applications such as predicting transpiration.
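A minimal sketch of the random forest step is shown below: sap flux is predicted from drone-derived canopy temperature together with meteorological predictors. The file and column names are placeholders, not the study's dataset.

```python
# Illustrative random-forest regression: sap flux from thermal + meteorology.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("transpiration.csv")     # hypothetical merged dataset
features = ["canopy_temp", "air_temp", "vpd", "radiation", "wind_speed"]
X, y = df[features], df["sap_flux"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("r2:", r2_score(y_test, rf.predict(X_test)))
print(dict(zip(features, rf.feature_importances_.round(3))))
```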


2020 ◽  
Vol 20 (11) ◽  
pp. 6020-6028 ◽  
Author(s):  
Md Ashfaque Hossain Khan ◽  
Brian Thomson ◽  
Ratan Debnath ◽  
Abhishek Motayed ◽  
Mulpuri V. Rao

Biosensors ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 193
Author(s):  
Alanna V. Zubler ◽  
Jeong-Yeol Yoon

Plant stresses have been monitored using the imaging or spectrometry of plant leaves in the visible (red-green-blue or RGB), near-infrared (NIR), infrared (IR), and ultraviolet (UV) wavebands, often augmented by fluorescence imaging or fluorescence spectrometry. Imaging at multiple specific wavelengths (multi-spectral imaging) or across a wide range of wavelengths (hyperspectral imaging) can provide exceptional information on plant stress and subsequent diseases. Digital cameras, thermal cameras, and optical filters have become available at a low cost in recent years, while hyperspectral cameras have become increasingly more compact and portable. Furthermore, smartphone cameras have dramatically improved in quality, making them a viable option for rapid, on-site stress detection. Due to these developments in imaging technology, plant stresses can be monitored more easily using handheld and field-deployable methods. Recent advances in machine learning algorithms have allowed for images and spectra to be analyzed and classified in a fully automated and reproducible manner, without the need for complicated image or spectrum analysis methods. This review will highlight recent advances in portable (including smartphone-based) detection methods for biotic and abiotic stresses, discuss data processing and machine learning techniques that can produce results for stress identification and classification, and suggest future directions towards the successful translation of these methods into practical use.
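As a deliberately simplified, hypothetical sketch of the automated classification step, the example below extracts basic color statistics from RGB leaf photos and feeds them to a classifier; the systems surveyed in the review use much richer multispectral, hyperspectral, and deep-learning features, and the folder layout here is assumed.

```python
# Toy plant-stress classifier: per-channel color statistics + SVM.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def rgb_stats(path):
    """Per-channel mean and std of an RGB leaf image."""
    arr = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    return np.concatenate([arr.mean(axis=(0, 1)), arr.std(axis=(0, 1))])

# Assumed folder layout: leaves/healthy/*.jpg and leaves/stressed/*.jpg
paths, labels = [], []
for label in ("healthy", "stressed"):
    for p in Path("leaves", label).glob("*.jpg"):
        paths.append(p)
        labels.append(label)

X = np.array([rgb_stats(p) for p in paths])
y = np.array(labels)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```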


2020 ◽  
Author(s):  
NaKyeong Kim ◽  
Suho Bak ◽  
Minji Jeong ◽  
Hongjoo Yoon

Sea fog is fog caused by the cooling of air near the ocean-atmosphere boundary layer when warm, moist air moves over a colder sea surface. Sea fog affects a variety of activities, including maritime and coastal transportation, military operations, and fishing. In particular, it is important to detect sea fog because the reduced visibility can lead to ship accidents. Because sea fog events cover wide areas and do not occur constantly, they are generally detected through satellite remote sensing. Since sea fog evolves over short periods of time, geostationary satellites, which offer higher temporal resolution than polar-orbiting satellites, are used to detect it. A method that detects fog using the difference between the 11 μm and 3.7 μm channels has been widely used in satellite remote sensing, but it has difficulty distinguishing low clouds from fog, and accurate thresholds separating fog from cloud are hard to find with traditional algorithms. Machine learning algorithms can be a useful tool for this decision. In this study, based on geostationary satellite imagery, a comparative analysis of sea fog detection accuracy was conducted using various machine learning methods, such as Random Forest, Multi-Layer Perceptron, and Convolutional Neural Networks.
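The sketch below contrasts the two approaches mentioned above on synthetic data: the classical dual-channel brightness-temperature difference (BTD) threshold and a per-pixel machine-learning classifier. The arrays, threshold value, and labels are placeholders, not the study's data.

```python
# Threshold-based vs. ML-based fog masks on synthetic brightness temperatures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
shape = (64, 64)
bt_11um = rng.normal(280, 5, shape)       # stand-in 11 um brightness temps (K)
bt_37um = rng.normal(277, 5, shape)       # stand-in 3.7 um brightness temps (K)

# Traditional approach: threshold the BTD (the value here is a placeholder).
btd = bt_11um - bt_37um
fog_mask_threshold = btd > 2.0

# ML approach: classify each pixel from its channel values, given training
# labels (in practice derived from, e.g., surface visibility observations).
X = np.column_stack([bt_11um.ravel(), bt_37um.ravel(), btd.ravel()])
y_train = fog_mask_threshold.ravel()      # placeholder labels for the sketch
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y_train)
fog_mask_ml = clf.predict(X).reshape(shape)
print(fog_mask_ml.sum(), "fog pixels detected")
```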


Author(s):  
Emir Kocer ◽  
Tsz Wai Ko ◽  
Jörg Behler

In the past two decades, machine learning potentials (MLPs) have reached a level of maturity that now enables applications to large-scale atomistic simulations of a wide range of systems in chemistry, physics, and materials science. Different machine learning algorithms have been used with great success in the construction of these MLPs. In this review, we discuss an important group of MLPs relying on artificial neural networks to establish a mapping from the atomic structure to the potential energy. In spite of this common feature, there are important conceptual differences among MLPs, which concern the dimensionality of the systems, the inclusion of long-range electrostatic interactions, global phenomena like nonlocal charge transfer, and the type of descriptor used to represent the atomic structure, which can be either predefined or learnable. A concise overview is given along with a discussion of the open challenges in the field. Expected final online publication date for the Annual Review of Physical Chemistry, Volume 73 is April 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
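A minimal conceptual sketch of such a high-dimensional neural network potential is given below: the total energy is the sum of atomic energies, each predicted by a small network from a per-atom environment descriptor. The descriptor dimension, network sizes, and data are assumptions and are not tied to any particular MLP package.

```python
# Conceptual atomic neural network potential: E_total = sum_i E_i(descriptor_i).
import torch
import torch.nn as nn

class AtomicNet(nn.Module):
    """Maps one atom's environment descriptor to an atomic energy."""
    def __init__(self, n_descriptors=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_descriptors, 32), nn.Tanh(),
            nn.Linear(32, 32), nn.Tanh(),
            nn.Linear(32, 1),
        )

    def forward(self, g):                       # g: (n_atoms, n_descriptors)
        return self.net(g).sum()                # sum of atomic energies

model = AtomicNet()
g = torch.randn(5, 8, requires_grad=True)       # dummy descriptors for 5 atoms
energy = model(g)
# Forces follow as negative gradients of the energy w.r.t. atomic coordinates;
# here we differentiate w.r.t. the descriptors only, as a stand-in.
(dE_dg,) = torch.autograd.grad(energy, g)
print(float(energy), dE_dg.shape)
```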


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 104
Author(s):  
Zaide Duran ◽  
Kubra Ozcan ◽  
Muhammed Enes Atik

With the development of photogrammetry technologies, point clouds have found a wide range of uses in academic and commercial areas. This situation has made it essential to extract information from point clouds. In particular, artificial intelligence applications have been used to extract information from point clouds of complex structures. Point cloud classification is also one of the leading areas in which these applications are used. In this study, the classification of point clouds obtained by aerial photogrammetry and Light Detection and Ranging (LiDAR) technology over the same region is performed using machine learning. For this purpose, nine popular machine learning methods have been used. Geometric features derived from the point clouds were used to build the feature spaces for classification, with color information additionally included for the photogrammetric point cloud. For the LiDAR point cloud, the highest overall accuracy, 0.96, was obtained with the Multilayer Perceptron (MLP) method, and the lowest, 0.50, with the AdaBoost method. For the photogrammetric point cloud, the highest overall accuracy was again achieved with the MLP method (0.90), and the lowest, 0.25, with the Gaussian Naive Bayes (GNB) method.
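The sketch below illustrates the kind of pipeline described, without claiming to reproduce the study's implementation: eigenvalue-based geometric features (linearity, planarity, sphericity, eigenentropy) are computed for each point's neighborhood and classified with an MLP; the input files and neighborhood size are assumptions.

```python
# Geometric-feature extraction from local neighborhoods + MLP classification.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

points = np.loadtxt("cloud.txt")[:, :3]          # hypothetical XYZ point cloud
labels = np.loadtxt("labels.txt", dtype=int)     # hypothetical per-point classes

nn_search = NearestNeighbors(n_neighbors=20).fit(points)
_, idx = nn_search.kneighbors(points)

features = []
for neighborhood in points[idx]:                 # (20, 3) neighbors per point
    cov = np.cov(neighborhood.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1] + 1e-12
    total = l1 + l2 + l3
    features.append([
        (l1 - l2) / l1,                          # linearity
        (l2 - l3) / l1,                          # planarity
        l3 / l1,                                 # sphericity
        -sum(l / total * np.log(l / total) for l in (l1, l2, l3)),  # eigenentropy
    ])

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
print(cross_val_score(clf, np.array(features), labels, cv=5).mean())
```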


2021 ◽  
Author(s):  
Zhen Chen ◽  
Pei Zhao ◽  
Chen Li ◽  
Fuyi Li ◽  
Dongxu Xiang ◽  
...  

Abstract Sequence-based analysis and prediction are fundamental bioinformatic tasks that facilitate understanding of the sequence(-structure)-function paradigm for DNAs, RNAs and proteins. Rapid accumulation of sequences requires equally pervasive development of new predictive models, which depends on the availability of effective tools that support these efforts. We introduce iLearnPlus, the first machine-learning platform with graphical- and web-based interfaces for the construction of machine-learning pipelines for analysis and predictions using nucleic acid and protein sequences. iLearnPlus provides a comprehensive set of algorithms and automates sequence-based feature extraction and analysis, construction and deployment of models, assessment of predictive performance, statistical analysis, and data visualization; all without programming. iLearnPlus includes a wide range of feature sets which encode information from the input sequences and over twenty machine-learning algorithms that cover several deep-learning approaches, outnumbering the current solutions by a wide margin. Our solution caters to experienced bioinformaticians, given the broad range of options, and biologists with no programming background, given the point-and-click interface and easy-to-follow design process. We showcase iLearnPlus with two case studies concerning prediction of long noncoding RNAs (lncRNAs) from RNA transcripts and prediction of crotonylation sites in protein chains. iLearnPlus is an open-source platform available at https://github.com/Superzchen/iLearnPlus/ with the webserver at http://ilearnplus.erc.monash.edu/.
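As a generic illustration of what such a platform automates, and explicitly not the iLearnPlus API, the sketch below encodes nucleotide sequences as 3-mer frequency features and trains a classifier, e.g., for distinguishing lncRNAs from other transcripts; the sequences and labels are toy placeholders.

```python
# Generic sequence-based feature extraction (3-mer frequencies) + classifier.
from itertools import product
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]   # 64 trinucleotides

def kmer_features(seq):
    """Normalized 3-mer frequency vector for one nucleotide sequence."""
    counts = np.array([seq.count(k) for k in KMERS], dtype=float)
    return counts / max(counts.sum(), 1.0)

# Hypothetical inputs: a list of sequences and binary labels (1 = lncRNA).
sequences = ["ATGCGTACGTTAGC", "GGCATGCCGTAATC", "TTTACGATCGGCTA", "CGCGATATGCGCAT"]
labels = np.array([1, 0, 1, 0])

X = np.array([kmer_features(s) for s in sequences])
clf = GradientBoostingClassifier(random_state=0)
print(cross_val_score(clf, X, labels, cv=2).mean())
```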

