Characterizing snow surface properties using airborne hyperspectral imagery for autonomous winter mobility

2021 ◽  
Author(s):  
Taylor Hodgdon ◽  
Anthony Fuentes ◽  
Brian Quinn ◽  
Bruce Elder ◽  
Sally Shoop

With changing conditions in northern climates, it is crucial for the United States to have assured mobility in these high-latitude regions. Winter terrain conditions adversely affect vehicle mobility and, as such, must be accurately characterized to ensure mission success. Previous studies have attempted to remotely characterize snow properties using varied sensors. However, these studies have primarily used satellite-based products that provide coarse spatial and temporal resolution, which is unsuitable for autonomous mobility. Our work employs an Unmanned Aerial Vehicle (UAV)-mounted hyperspectral camera in tandem with machine learning frameworks to predict snow surface properties at finer scales. Several machine learning models were trained using hyperspectral imagery together with in-situ snow measurements. The results indicate that random forest and k-nearest neighbors models had the lowest Mean Absolute Error for all surface snow properties. A Pearson correlation matrix showed that density, grain size, and moisture content all had a significant positive correlation with one another. Mechanically, density and grain size had a slightly positive correlation with compressive strength, while moisture had a much weaker negative correlation. This work provides preliminary insight into the efficacy of using hyperspectral imagery to characterize snow properties for autonomous vehicle mobility.
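
A minimal sketch of the modeling workflow summarized above, assuming per-pixel hyperspectral reflectances and co-located in-situ snow measurements are already assembled into arrays; the variable names, synthetic data, and train/test split are illustrative, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder inputs: X = per-pixel hyperspectral reflectances,
# y = one co-located in-situ snow property (e.g., density).
rng = np.random.default_rng(0)
X = rng.random((200, 50))   # 200 samples x 50 spectral bands (synthetic)
y = rng.random(200)         # normalized snow density (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "k-nearest neighbors": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: MAE = {mae:.3f}")

# Pearson correlation matrix between measured snow properties
# (columns would be density, grain size, moisture, compressive strength).
properties = rng.random((200, 4))
print(np.corrcoef(properties, rowvar=False))
```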

2020 ◽  
Vol 4 (1) ◽  
pp. 513-524
Author(s):  
Asabe E. Garba ◽  
George T. Grossberg ◽  
Kimberly R. Enard ◽  
Fabian J. Jano ◽  
Emma N. Roberts ◽  
...  

Background: Alzheimer’s disease (AD) is the 6th leading cause of death in the United States and has no cure or means of preventing its progression. The Cognitive Reserve (CR) theory posits that sustained brain activity earlier in life helps to deter later pathological changes in the brain, delaying the onset of disease symptoms. Objective: To determine the reliability and validity of the Cognitive Reserve Index questionnaire (CRIq) in AD patients. Methods: Primary data collection was done using the CRIq to quantify CR in 90 participants. Correlations and multivariable linear regressions were used to assess reliability and validity. Results: Reliability was tested in 34 participants. A Pearson correlation coefficient of 0.89 (p < 0.001) indicated a strong positive correlation. Validity was tested in 33 participants. A Pearson correlation coefficient of 0.30 (p = 0.10) indicated a weak, statistically non-significant positive correlation. Conclusion: The CRIq was found reliable. A better understanding of how CR tools perform in various cognitive populations will help establish a research tool that is universally accepted as a true measure of CR.
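
For readers unfamiliar with the reliability analysis, a test-retest Pearson correlation of the kind reported here can be computed as in this small sketch; the score arrays are hypothetical placeholders, not study data.

```python
from scipy.stats import pearsonr

# Hypothetical test-retest CRIq totals for the same participants (placeholder values).
scores_time1 = [112, 98, 130, 105, 121, 89, 140, 101]
scores_time2 = [115, 95, 128, 108, 119, 92, 137, 104]

r, p_value = pearsonr(scores_time1, scores_time2)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")  # a high r indicates good test-retest reliability
```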


2020 ◽  
Author(s):  
Paul Donchenko ◽  
Joshua King ◽  
Richard Kelly

Abstract. Recent studies have challenged the assumption that the Ku-band radar used by the CryoSat-2 altimeter fully penetrates the dry snow cover of Arctic sea ice. There is also uncertainty around the proper technique for handling retracker threshold selection in the Threshold First-Maxima Retracker (TFMRA) method, which estimates the ice surface elevation from the radar echo waveform. The purpose of this study was to evaluate the accuracy and penetration of the TFMRA retracking method applied to the Airborne Synthetic Aperture Radar and Interferometric Radar Altimeter System (ASIRAS), an airborne simulator of CryoSat-2, in order to investigate the effect of surface characteristics and improve accuracy. The ice surface elevation estimate from ASIRAS was evaluated by comparing it to the snow surface, measured by aggregating laser altimetry observations from the Airborne Laser Scanner (ALS), and to the ice surface, obtained by subtracting ground observations of snow depth from the snow surface. The perceived penetration of the ice surface estimate was found to increase with the retracker threshold and was correlated with the values of several surface properties. The slope of the relationship between penetration and threshold was greater for a deformed ice surface, a rough snow surface, a deeper snow cover, an absence of salinity, and a larger snow grain size. As a result, the ideal retracker threshold, one that would achieve 100 % penetration, varies depending on the properties of the surface being observed. Under conditions such as deep snow or a large grain size, the retracked elevation was found in some cases not to fully penetrate the snowpack. This would cause an overestimation of the sea ice freeboard and, as a consequence, of the sea ice thickness. Results suggest that using a single threshold with the TFMRA retracking method will not yield a reliable estimate of the snow-ice interface when observed over an area with diverse surface properties. However, there may be potential to improve the retracking method by incorporating knowledge of the sensed surface physical characteristics. This study shows that remotely sensed surface properties, such as ice deformity or snow surface roughness, can be combined with the waveform shape to select an ideal retracker threshold for individual returns, with an additional offset to account for the incomplete penetration of the Ku-band over the relevant surface characteristics.
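
A simplified sketch of a threshold first-maxima retracker of the kind evaluated here, assuming a single calibrated power waveform on a uniform range-bin grid; the smoothing, noise-floor estimate, and interpolation details are illustrative choices, not the exact ASIRAS/CryoSat-2 processing chain.

```python
import numpy as np

def tfmra_retrack(waveform, threshold=0.5, noise_bins=20, smooth=5):
    """Return the fractional range bin of the retracked surface.

    waveform  : 1-D array of echo power
    threshold : fraction of the first-maximum power (e.g., 0.4, 0.5, 0.8)
    noise_bins: leading bins used to estimate the noise floor
    smooth    : boxcar width used to suppress noise before peak detection
    """
    wf = np.convolve(np.asarray(waveform, float), np.ones(smooth) / smooth, mode="same")
    noise = wf[:noise_bins].mean() + 3 * wf[:noise_bins].std()

    # First local maximum that exceeds the noise floor.
    peaks = np.where((wf[1:-1] > wf[:-2]) & (wf[1:-1] >= wf[2:]) & (wf[1:-1] > noise))[0] + 1
    if peaks.size == 0:
        return np.nan
    first_max = peaks[0]
    level = threshold * wf[first_max]

    # First bin on the leading edge at or above the threshold level,
    # refined by linear interpolation between adjacent bins.
    i = np.argmax(wf[:first_max + 1] >= level)
    if i == 0:
        return 0.0
    return (i - 1) + (level - wf[i - 1]) / (wf[i] - wf[i - 1])

# Synthetic example: the retracked bin shifts deeper as the threshold increases,
# mirroring the threshold dependence of perceived penetration discussed above.
bins = np.arange(256)
wf = np.exp(-0.5 * ((bins - 120) / 4) ** 2) + 0.01 * np.random.default_rng(1).random(256)
for t in (0.4, 0.5, 0.8):
    print(f"threshold {t}: retracked bin {tfmra_retrack(wf, threshold=t):.2f}")
```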


Author(s):  
Navid Asadizanjani ◽  
Sachin Gattigowda ◽  
Mark Tehranipoor ◽  
Domenic Forte ◽  
Nathan Dunn

Abstract Counterfeiting is an increasing concern for businesses and governments as greater numbers of counterfeit integrated circuits (ICs) infiltrate the global market. There is an ongoing effort in experimental and national labs in the United States to detect and prevent such counterfeits as efficiently as possible. However, a missing piece remains: a way to automatically detect counterfeit ICs and properly keep records of those that are found. Here, we introduce a web application that allows users to share previous examples of counterfeits through an online database and to obtain statistics regarding the prevalence of known defects. We also investigate automated techniques based on image processing and machine learning to detect different physical defects and to determine whether or not an IC is counterfeit.
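
As a rough illustration of the image-processing-plus-machine-learning detection mentioned above, assuming a labeled set of grayscale IC package images; the HOG features and SVM classifier are stand-ins, not the authors' actual pipeline.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder data: real use would load grayscale IC package images and labels.
rng = np.random.default_rng(0)
images = rng.random((100, 128, 128))    # 100 synthetic 128x128 images
labels = rng.integers(0, 2, size=100)   # 0 = authentic, 1 = counterfeit

# Histogram-of-oriented-gradients features capture surface texture and marking edges.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    for img in images
])

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```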


2020 ◽  
Author(s):  
Carson Lam ◽  
Jacob Calvert ◽  
Gina Barnes ◽  
Emily Pellegrini ◽  
Anna Lynn-Palevsky ◽  
...  

BACKGROUND In the wake of COVID-19, the United States has developed a three-stage plan outlining the parameters for determining when states may reopen businesses and ease travel restrictions. The guidelines also identify subpopulations of Americans who should continue to stay at home because they are at high risk for severe disease should they contract COVID-19. These guidelines were based on population-level demographics rather than individual-level risk factors. As such, they may misidentify individuals at high risk for severe illness who should therefore not return to work until vaccination or widespread serological testing is available. OBJECTIVE This study evaluated a machine learning algorithm for the prediction of serious illness due to COVID-19 using inpatient data collected from electronic health records. METHODS The algorithm was trained to identify patients for whom a diagnosis of COVID-19 was likely to result in hospitalization, and was compared against four U.S. policy-based criteria: age over 65; having a serious underlying health condition; age over 65 or having a serious underlying health condition; and age over 65 and having a serious underlying health condition. RESULTS The algorithm identified 80% of patients at risk for hospitalization due to COVID-19, versus at most 62% identified by the government guidelines. The algorithm also achieved a high specificity of 95%, outperforming the government guidelines. CONCLUSIONS This algorithm may help to enable a broad reopening of the American economy while ensuring that patients at high risk for serious disease remain home until vaccination and testing become available.
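
A minimal sketch of the comparison reported here: sensitivity and specificity for a model's risk predictions versus a simple guideline-style rule (age over 65 or a serious underlying condition). All arrays are illustrative placeholders, not study data.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Placeholder cohort: 1 = hospitalized with COVID-19, 0 = not.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
age_over_65 = rng.integers(0, 2, size=500)
serious_condition = rng.integers(0, 2, size=500)
model_risk_score = rng.random(500)

# Rule-based criterion: flag anyone over 65 OR with a serious underlying condition.
rule_pred = (age_over_65 == 1) | (serious_condition == 1)
# Model criterion: flag anyone whose predicted risk exceeds a chosen threshold.
model_pred = model_risk_score > 0.5

for name, pred in [("guideline rule", rule_pred), ("model", model_pred)]:
    sens, spec = sensitivity_specificity(y_true, pred.astype(int))
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```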


Author(s):  
Timnit Gebru

This chapter discusses the role of race and gender in artificial intelligence (AI). The rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial automated facial analysis systems have much higher error rates for dark-skinned women, while having minimal errors on light-skinned men. Moreover, a 2016 ProPublica investigation uncovered that machine learning–based tools that assess crime recidivism rates in the United States are biased against African Americans. Other studies show that natural language–processing tools trained on news articles exhibit societal biases. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach must be taken. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.


2021 ◽  
Vol 14 (5) ◽  
pp. 472
Author(s):  
Tyler C. Beck ◽  
Kyle R. Beck ◽  
Jordan Morningstar ◽  
Menny M. Benjamin ◽  
Russell A. Norris

Roughly 2.8% of annual hospitalizations in the United States, representing more than 245,000 hospitalizations, are a result of adverse drug interactions. Drug–drug interactions commonly arise from major cytochrome P450 (CYP) inhibition. Various approaches are routinely employed to reduce the incidence of adverse interactions, such as altering drug dosing schemes and/or minimizing the number of drugs prescribed; often, however, a reduction in the number of medications cannot be achieved without impacting therapeutic outcomes. Nearly 80% of drugs fail in development due to pharmacokinetic issues, underscoring the importance of examining cytochrome interactions during preclinical drug design. In this review, we examined the physicochemical and structural properties of small molecule inhibitors of CYPs 3A4, 2D6, 2C19, 2C9, and 1A2. Although CYP inhibitors tend to have distinct physicochemical properties and structural features, these descriptors alone are insufficient to predict major cytochrome inhibition probability and affinity. Machine learning-based in silico approaches may be employed as a more robust and accurate way of predicting CYP inhibition. These various approaches are highlighted in the review.
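
A schematic example of the in silico approach the review points to: predicting CYP inhibition from tabulated physicochemical descriptors with a tree-based classifier. The descriptor set and synthetic values are assumptions for illustration, not data from the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder descriptor table: molecular weight, logP, topological polar surface
# area, H-bond donors/acceptors, aromatic ring count (all values are synthetic).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(400, 100, 300),   # MW
    rng.normal(3, 1.5, 300),     # cLogP
    rng.normal(80, 30, 300),     # TPSA
    rng.integers(0, 5, 300),     # H-bond donors
    rng.integers(0, 10, 300),    # H-bond acceptors
    rng.integers(0, 4, 300),     # aromatic rings
])
y = rng.integers(0, 2, 300)      # 1 = CYP3A4 inhibitor, 0 = non-inhibitor

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated ROC AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```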


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Brandon Craig ◽  
Xiaolin Wang ◽  
Jeanne Sandella ◽  
Tsung-Hsun Tsai ◽  
David Kuo ◽  
...  

Abstract Context The Comprehensive Osteopathic Medical Licensing Examination of the United States of America (COMLEX-USA) is a three-level examination used as a pathway to licensure for students in osteopathic medical education programs. COMLEX-USA Level 2 includes a written assessment of Fundamental Clinical Sciences for Osteopathic Medical Practice (Level 2-Cognitive Evaluation [L2-CE]) delivered in a computer-based format and a separate performance evaluation (Level 2-Performance Evaluation [L2-PE]) administered through live encounters with standardized patients. L2-PE was designed to augment L2-CE. The two examinations are expected to measure related yet distinct constructs. Objectives To explore the concurrent validity of L2-CE with L2-PE. Methods First-attempt test scores were obtained from the National Board of Osteopathic Medical Examiners database for 6,639 candidates who took L2-CE between June 2019 and May 2020 and were matched to the students’ L2-PE scores. The sample represented all colleges of osteopathic medicine and 97.5% of candidates who took L2-CE during the complete 2019–2020 test cycle. We calculated disattenuated correlations between the total score for L2-CE, the L2-CE scores for the seven competency domains (CD1 through CD7), and the L2-PE scores for the Humanistic Domain (HM) and Biomedical/Biomechanical Domain (BM). All scores were on continuous scales. Results Pearson correlations ranged from 0.10 to 0.88 and were all statistically significant (p<0.01). The L2-CE total score was most strongly correlated with CD2 (0.88) and CD3 (0.85). Pearson correlations between the L2-CE competency domain subscores ranged from 0.17 to 0.70, and correlations involving either HM or BM ranged from 0.10 to 0.34, with the strongest of those correlations being between BM and the L2-CE total score (0.34) and between HM and BM (0.28). The largest increase between corresponding Pearson and disattenuated correlations was for pairs of scores with lower reliabilities, such as CD5 and CD6, which had a Pearson correlation of 0.17 and a disattenuated correlation of 0.68. The smallest increase was observed in pairs of scores with larger reliabilities, such as the L2-CE total score and HM, which had a Pearson correlation of 0.23 and a disattenuated correlation of 0.28. The reliability was 0.87 for L2-CE, 0.81 for HM, and 0.73 for BM. The reliabilities for the L2-CE competency domain scores ranged from 0.22 to 0.74. The small to moderate correlations between the L2-CE total score and the two L2-PE domain scores support the expectation that these examinations measure related but distinct constructs. The correlations between L2-PE and L2-CE competency domain subscores reflect the distribution of items defined by the L2-PE blueprint, providing evidence that the examinations are performing as designed. Conclusions This study provides evidence supporting the validity of the blueprints for constructing the COMLEX-USA Level 2-CE and 2-PE examinations in concert with the purpose and nature of the examinations.
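
For context, a disattenuated correlation corrects an observed Pearson correlation for measurement unreliability by dividing it by the square root of the product of the two scores' reliabilities; a quick check against the L2-CE total and HM values reported above is sketched below.

```python
def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation: r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / (rel_x * rel_y) ** 0.5

# L2-CE total score vs. HM: observed r = 0.23, reliabilities 0.87 and 0.81.
print(round(disattenuate(0.23, 0.87, 0.81), 2))  # ~0.27, close to the reported 0.28
```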


Agronomy ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 35
Author(s):  
Xiaodong Huang ◽  
Beth Ziniti ◽  
Michael H. Cosh ◽  
Michele Reba ◽  
Jinfei Wang ◽  
...  

Soil moisture is a key indicator for assessing cropland drought and irrigation status as well as for forecasting production. Compared with optical data, which are obscured by the crop canopy, Synthetic Aperture Radar (SAR) is an efficient tool for detecting surface soil moisture under vegetation cover owing to its strong penetration capability. This paper studies soil moisture retrieval using polarimetric Phased Array-type L-band SAR 2 (PALSAR-2) data acquired over the study region in Arkansas in the United States. Both a two-component model-based decomposition (SAR data alone) and machine learning (SAR + optical indices) methods are tested and compared in this paper. Validation using independent ground measurements shows that both methods achieved a Root Mean Square Error (RMSE) of less than 10 (vol.%), while the machine learning methods outperformed the model-based decomposition, achieving an RMSE of 7.70 (vol.%) and an R2 of 0.60.
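
The quoted validation metrics can be reproduced from paired retrievals and ground measurements as in the short sketch below; the paired values are placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import r2_score

# Placeholder pairs: measured vs. retrieved volumetric soil moisture (vol.%).
measured  = np.array([12.0, 18.5, 25.3, 30.1, 22.7, 15.8, 27.9, 20.4])
retrieved = np.array([14.1, 16.0, 28.7, 26.5, 24.9, 18.2, 25.0, 23.3])

rmse = np.sqrt(np.mean((retrieved - measured) ** 2))  # RMSE in vol.%
r2 = r2_score(measured, retrieved)
print(f"RMSE = {rmse:.2f} vol.%, R2 = {r2:.2f}")
```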


2020 ◽  
Vol 41 (S1) ◽  
pp. s521-s522
Author(s):  
Debarka Sengupta ◽  
Vaibhav Singh ◽  
Seema Singh ◽  
Dinesh Tewari ◽  
Mudit Kapoor ◽  
...  

Background: The rising trend of antibiotic resistance imposes a heavy burden on healthcare, both clinically and economically (US$55 billion), with 23,000 estimated annual deaths in the United States as well as increased length of stay and morbidity. Machine-learning–based methods have, of late, been used to leverage a patient's clinical history and demographic information to predict antimicrobial resistance. We developed a machine-learning model ensemble that maximizes the accuracy of such drug-sensitivity versus resistivity classification compared to the existing best-practice methods. Methods: We first performed a comprehensive analysis of the association between infecting bacterial species and patient factors, including patient demographics, comorbidities, and certain healthcare-specific features. We leveraged the predictable nature of these complex associations to infer patient-specific antibiotic sensitivities. Various base learners, including k-NN (k-nearest neighbors) and gradient boosting machine (GBM), were used to train an ensemble model for confident prediction of antimicrobial susceptibilities. Base learner selection and model performance evaluation were performed carefully using a variety of standard metrics, namely accuracy, precision, recall, F1 score, and Cohen's kappa. Results: Performance was validated on the MIMIC-III database, which harbors deidentified clinical data for 53,423 distinct admissions to the intensive care units (ICUs) of the Beth Israel Deaconess Medical Center in Boston, Massachusetts, between 2001 and 2012. From ~11,000 positive cultures, we used 4 major specimen types, namely urine, sputum, blood, and pus swab, to evaluate model performance. Figure 1 shows the receiver operating characteristic (ROC) curves obtained for bloodstream infection cases upon model building and prediction on a 70:30 split of the data. We obtained area under the curve (AUC) values of 0.88, 0.92, 0.92, and 0.94 for urine, sputum, blood, and pus swab samples, respectively. Figure 2 shows the comparative performance of our proposed method as well as some off-the-shelf classification algorithms. Conclusions: Highly accurate, patient-specific predictive antibiogram (PSPA) data can aid clinicians significantly in antibiotic recommendation in the ICU, thereby accelerating patient recovery and curbing antimicrobial resistance. Funding: This study was supported by Circle of Life Healthcare Pvt. Ltd. Disclosures: None
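
A condensed sketch of an ensemble built from the base learners named above (k-NN and a gradient boosting machine), evaluated with ROC AUC on a 70:30 split; the feature matrix and labels are placeholders, and soft voting stands in for whatever ensembling scheme was actually used.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder data: patient history/demographic features and a binary label
# (0 = susceptible, 1 = resistant) for a given antibiotic.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=15)),
        ("gbm", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities from the base learners
)
ensemble.fit(X_train, y_train)
auc = roc_auc_score(y_test, ensemble.predict_proba(X_test)[:, 1])
print(f"ROC AUC = {auc:.2f}")
```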


2021 ◽  
Vol 7 (9) ◽  
pp. 4614-4625
Author(s):  
Carolin A. Rickert ◽  
Elif N. Hayta ◽  
Daniel M. Selle ◽  
Ioannis Kouroudis ◽  
Milan Harth ◽  
...  
