Assisting scalable diagnosis automatically via CT images in the combat against COVID-19

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Bohan Liu ◽  
Pan Liu ◽  
Lutao Dai ◽  
Yanlin Yang ◽  
Peng Xie ◽  
...  

Abstract The pandemic of Coronavirus Disease 2019 (COVID-19) is causing enormous loss of life globally. Prompt case identification is critical. The reference method is the real-time reverse transcription PCR (RT-PCR) assay, whose limitations may curb its prompt large-scale application. COVID-19 manifests with chest computed tomography (CT) abnormalities, some even before the onset of symptoms. We tested the hypothesis that the application of deep learning (DL) to 3D CT images could help identify COVID-19 infections. Using data from 920 COVID-19 and 1,073 non-COVID-19 pneumonia patients, we developed a modified DenseNet-264 model, COVIDNet, to classify CT images to either class. When tested on an independent set of 233 COVID-19 and 289 non-COVID-19 pneumonia patients, COVIDNet achieved an accuracy rate of 94.3% and an area under the curve of 0.98. As of March 23, 2020, the COVIDNet system had been used 11,966 times with a sensitivity of 91.12% and a specificity of 88.50% in six hospitals with PCR confirmation. Application of DL to CT images may improve both efficiency and capacity of case detection and long-term surveillance.
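The accuracy, sensitivity, and specificity reported above all follow directly from a model's confusion counts. A minimal sketch of the arithmetic, using hypothetical counts (not the study's data), where the positive class is COVID-19:

```python
# Classification metrics from a confusion matrix, as reported for COVIDNet.
# The counts below are hypothetical, chosen only to illustrate the formulas.

def classification_metrics(tp, fp, tn, fn):
    """Return (accuracy, sensitivity, specificity) from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: COVID-19 cases correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: non-COVID-19 cases correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical split of the 233 COVID-19 and 289 non-COVID-19 test patients
acc, sens, spec = classification_metrics(tp=205, fp=33, tn=256, fn=28)
```

Note that sensitivity depends only on how the positive class is handled and specificity only on the negative class, which is why the deployed system can report both separately from overall accuracy.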


2019 ◽  
Vol 11 (4) ◽  
pp. 1655-1674 ◽  
Author(s):  
Gionata Ghiggi ◽  
Vincent Humphrey ◽  
Sonia I. Seneviratne ◽  
Lukas Gudmundsson

Abstract. Freshwater resources are of high societal relevance, and understanding their past variability is vital to water management in the context of ongoing climate change. This study introduces a global gridded monthly reconstruction of runoff covering the period from 1902 to 2014. In situ streamflow observations are used to train a machine learning algorithm that predicts monthly runoff rates based on antecedent precipitation and temperature from an atmospheric reanalysis. The accuracy of this reconstruction is assessed with cross-validation and compared with an independent set of discharge observations for large river basins. The presented dataset agrees on average better with the streamflow observations than an ensemble of 13 state-of-the-art global hydrological model runoff simulations. We estimate a global long-term mean runoff of 38,452 km³ yr⁻¹ in agreement with previous assessments. The temporal coverage of the reconstruction offers an unprecedented view on large-scale features of runoff variability in regions with limited data coverage, making it an ideal candidate for large-scale hydro-climatic process studies, water resource assessments, and evaluating and refining existing hydrological models. The paper closes with example applications fostering the understanding of global freshwater dynamics, interannual variability, drought propagation and the response of runoff to atmospheric teleconnections. The GRUN dataset is available at https://doi.org/10.6084/m9.figshare.9228176 (Ghiggi et al., 2019).
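The core supervised-learning setup described above maps meteorological predictors to observed runoff. A minimal stand-in sketch of that predictor/target layout, substituting a k-nearest-neighbour regressor on synthetic data for the paper's actual machine learning algorithm:

```python
# Predicting monthly runoff from antecedent precipitation and temperature.
# GRUN trains on in situ streamflow observations; this sketch uses a
# k-nearest-neighbour regressor and synthetic numbers purely to show the
# (predictors -> runoff) training layout, not the paper's method or data.

def knn_predict(train_X, train_y, x, k=3):
    """Average the runoff of the k training months closest in (precip, temp)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), y)
        for row, y in zip(train_X, train_y)
    )
    nearest = [y for _, y in dists[:k]]
    return sum(nearest) / len(nearest)

# Synthetic training set: (antecedent precipitation mm, temperature degC) -> runoff mm
train_X = [(120, 5), (80, 12), (200, 4), (30, 18), (150, 8), (60, 15)]
train_y = [55, 25, 110, 5, 70, 15]

pred = knn_predict(train_X, train_y, (140, 6))
```

Cross-validation, as used in the paper, would hold out some catchments during training and score predictions like `pred` against their observed runoff.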


2017 ◽  
Vol 40 (3) ◽  
pp. 280-310
Author(s):  
Marinella Caruso ◽  
Josh Brown

Abstract This article discusses the validity of the bonus for languages other than English (known as the Language Bonus) established in Australia to boost participation in language education. In subjecting this incentive plan to empirical investigation, we not only address a gap in the literature, but also continue the discussion on how to ensure that the efforts made by governments, schools, education agencies and teachers to support language study in schooling can have long-term success. Using data from a large-scale investigation, we consider the significance of the Language Bonus in influencing students’ decisions to study a language at school and at university. While this paper has a local focus – an English-speaking country in which language study is not compulsory – it engages with questions from the broader agenda of providing incentives for learning languages. It will be especially relevant for language policy in English-speaking countries.


Author(s):  
D.R. Stevens ◽  
G. Young

The collection and use of data from large-scale farming operations provided significant insights into drivers of sheep performance. These drivers included minimum two-tooth liveweight at tupping, ewe condition and pasture cover at lambing, and the importance of weaning weight on whole-farm performance. Using these data to demonstrate the influence of management decisions resulted in an increase in average lamb liveweight gain between birth and weaning of approximately 20 g/day in Landcorp Farming Ltd East Coast flocks over the 4 years of monitoring. Lambing percentage was harder to change, though individual farms increased lambing percentage by up to 35% by concentrating on increasing feed allocation and maintaining ewe body condition score during winter. Low liveweight in some two-tooth ewes was inversely related to the percentage of dries in a flock and prompted more emphasis on growing replacement stock. The programme shifted focus from short-term tactical feeding and management decisions to long-term strategies such as stock and sales policies that placed the breeding flock as the major priority. Keywords: breeding ewes, data, lambing percentage, lambs, liveweight gain, whole flock analysis.


Author(s):  
Nguyen Van Tan

This paper examines the impact of equitization on the financial and operating performance of state-owned enterprises (SOEs) in Vietnam. Previous privatization theories have not explained whether equitized SOEs improve their financial and operating performance relative to non-equitized SOEs. This study uses a with-without comparison method, measuring the average treatment effect of equitization on the financial and operating performance of SOEs. Using data on 114 SOEs equitized between 2012 and 2014, the author finds that equitized SOEs do not improve profitability, operating efficiency, or output relative to non-equitized SOEs. There is also no evidence of a reduction in the number of employees of equitized SOEs after equitization. These findings contrast with previous studies in Vietnam but resemble the results of studies in China. This is because equitized SOEs in the early post-equitization period in Vietnam are still monitored by the Vietnamese government, and the enterprises equitized in 2012-2014 are mainly large-scale ones with slow change of operating objectives and monitoring mechanisms, and weak competitiveness after equitization. However, equitization can help equitized SOEs operate more efficiently than non-equitized SOEs when considering non-listing status or industry group. This research provides implications for the Vietnamese government in encouraging non-equitized enterprises to participate actively in the equitization program. The results also help investors form appropriate long-term investment strategies in equitized SOEs. This paper also has some limitations for further research.


2004 ◽  
Vol 19 (1) ◽  
pp. 2-3
Author(s):  
Gloria Leon ◽  
Melissa A. Polusny

In the immediate aftermath of disasters and terrorism, it is critical to rapidly respond to the physical/medical needs of survivors to reduce injuries and the loss of life. Consistent with these situational demands, the description of such events is usually in terms of the resulting number of casualties and physical injuries sustained, with little recognition of or attention to the potential psychosocial consequences that may be experienced by survivors. However, individuals exposed to natural and human-made disasters, including acts of terrorism and large-scale violence, may experience serious immediate and long-term psychological difficulties.


2019 ◽  
Vol 116 (10) ◽  
pp. 3988-3993 ◽  
Author(s):  
Behzad Tabibian ◽  
Utkarsh Upadhyay ◽  
Abir De ◽  
Ali Zarezade ◽  
Bernhard Schölkopf ◽  
...  

Spaced repetition is a technique for efficient memorization that uses repeated review of content, following a schedule determined by a spaced repetition algorithm, to improve long-term retention. However, current spaced repetition algorithms are simple rule-based heuristics with a few hard-coded parameters. Here, we introduce a flexible representation of spaced repetition using the framework of marked temporal point processes and then address the design of spaced repetition algorithms with provable guarantees as an optimal control problem for stochastic differential equations with jumps. For two well-known human memory models, we show that, if the learner aims to maximize recall probability of the content to be learned subject to a cost on the reviewing frequency, the optimal reviewing schedule is given by the recall probability itself. As a result, we can then develop a simple, scalable online spaced repetition algorithm, MEMORIZE, to determine the optimal reviewing times. We perform a large-scale natural experiment using data from Duolingo, a popular language-learning online platform, and show that learners who follow a reviewing schedule determined by our algorithm memorize more effectively than learners who follow alternative schedules determined by several heuristics.
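The key result, that the optimal reviewing intensity tracks the forgetting of the item, can be sketched for a single item under an exponential forgetting-curve memory model: the intensity is taken proportional to 1 − m(t), where m(t) is the current recall probability, and review times are drawn by standard thinning. Parameter values and the forgetting model below are illustrative assumptions, not the paper's experimental setup:

```python
# Sketch of a MEMORIZE-style reviewing intensity u(t) = (1/sqrt(q)) * (1 - m(t))
# for one item, with m(t) an exponential forgetting curve that resets to 1 at
# each review. Review times are sampled with thinning against the bound 1/sqrt(q).
import math
import random

def simulate_reviews(horizon, q=1.0, forgetting_rate=0.5, seed=7):
    rng = random.Random(seed)
    max_intensity = 1.0 / math.sqrt(q)       # u(t) never exceeds this bound
    t, t_last, reviews = 0.0, 0.0, []
    while True:
        t += rng.expovariate(max_intensity)  # candidate event from the bound
        if t > horizon:
            return reviews
        recall = math.exp(-forgetting_rate * (t - t_last))  # memory model m(t)
        u = max_intensity * (1.0 - recall)   # reviewing intensity
        if rng.random() < u / max_intensity: # thinning: accept w.p. u / bound
            reviews.append(t)
            t_last = t                       # a review resets recall to 1

reviews = simulate_reviews(horizon=50.0)
```

Right after a review the intensity drops to zero and then grows as the item is forgotten, so reviews space themselves out automatically, which is the qualitative behaviour the abstract describes.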


2011 ◽  
Vol 4 (3) ◽  
pp. 2525-2565 ◽  
Author(s):  
A. J. Mannucci ◽  
C. O. Ao ◽  
X. Pi ◽  
B. A. Iijima

Abstract. We study the impact of large-scale ionospheric structure on the accuracy of radio occultation (RO) retrievals of atmospheric parameters such as refractivity and temperature. We use a climatological model of the ionosphere as well as an ionospheric data assimilation model to compare quiet and geomagnetically disturbed conditions. The largest contributor to ionospheric bias is physical separation of the two GPS frequencies as the GPS signal traverses the ionosphere and atmosphere. We analyze this effect in detail using ray-tracing and a full geophysical retrieval system. During quiet conditions, our results are similar to previously published studies. The impact of a major ionospheric storm is analyzed using data from the 30 October 2003 "Halloween" superstorm period. The temperature retrieval bias under disturbed conditions varies from 1 K to 2 K between 20 and 32 km altitude, compared to 0.2–0.3 K during quiet conditions. These results suggest the need for ionospheric monitoring as part of an RO-based climate observation strategy. We find that even during quiet conditions, the magnitude of retrieval bias depends critically on ionospheric conditions, which may explain variations in previously published bias estimates that use a variety of assumptions regarding large-scale ionospheric structure. We quantify the impact of spacecraft orbit altitude on the magnitude of bending angle error. Satellites in higher altitude orbits (≥700 km) tend to have lower biases due to the tendency of the residual bending to cancel between the top and bottomside ionosphere. We conclude with remarks on the implications of this study for long-term climate monitoring using RO.


Stroke ◽  
2020 ◽  
Vol 51 (9) ◽  
Author(s):  
Hooman Kamel ◽  
Babak B. Navi ◽  
Neal S. Parikh ◽  
Alexander E. Merkler ◽  
Peter M. Okin ◽  
...  

Background and Purpose: One-fifth of ischemic strokes are embolic strokes of undetermined source (ESUS). Their theoretical causes can be classified as cardioembolic versus noncardioembolic. This distinction has important implications, but the categories’ proportions are unknown. Methods: Using data from the Cornell Acute Stroke Academic Registry, we trained a machine-learning algorithm to distinguish cardioembolic versus noncardioembolic strokes, then applied the algorithm to ESUS cases to determine the predicted proportion with an occult cardioembolic source. A panel of neurologists adjudicated stroke etiologies using standard criteria. We trained a machine learning classifier using data on demographics, comorbidities, vitals, laboratory results, and echocardiograms. An ensemble predictive method including L1 regularization, gradient-boosted decision tree ensemble (XGBoost), random forests, and multivariate adaptive splines was used. Random search and cross-validation were used to tune hyperparameters. Model performance was assessed using cross-validation among cases of known etiology. We applied the final algorithm to an independent set of ESUS cases to determine the predicted mechanism (cardioembolic or not). To assess our classifier’s validity, we correlated the predicted probability of a cardioembolic source with the eventual post-ESUS diagnosis of atrial fibrillation. Results: Among 1083 strokes with known etiologies, our classifier distinguished cardioembolic versus noncardioembolic cases with excellent accuracy (area under the curve, 0.85). Applied to 580 ESUS cases, the classifier predicted that 44% (95% credibility interval, 39%–49%) resulted from cardiac embolism. Individual ESUS patients’ predicted likelihood of cardiac embolism was associated with eventual atrial fibrillation detection (OR per 10% increase, 1.27 [95% CI, 1.03–1.57]; c-statistic, 0.68 [95% CI, 0.58–0.78]). ESUS patients with high predicted probability of cardiac embolism were older and had more coronary and peripheral vascular disease, lower ejection fractions, larger left atria, lower blood pressures, and higher creatinine levels. Conclusions: A machine learning estimator that distinguished known cardioembolic versus noncardioembolic strokes indirectly estimated that 44% of ESUS cases were cardioembolic.
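The c-statistic used to validate the classifier against later atrial-fibrillation detection is the area under the ROC curve, which equals the probability that a randomly chosen positive case receives a higher predicted probability than a randomly chosen negative case. A sketch of that rank-based computation on toy scores (not the registry data):

```python
# The c-statistic (AUC) as a pairwise rank probability: the fraction of
# (positive, negative) pairs where the positive case outscores the negative,
# counting ties as half. Scores and labels below are toy values.

def c_statistic(scores, labels):
    """AUC via pairwise comparison of positive vs. negative scores."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted probabilities of a cardioembolic source vs. later AF detection
auc = c_statistic([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0])
```

Because it depends only on the ranking of scores, the c-statistic is insensitive to any monotone recalibration of the predicted probabilities, which makes it a natural summary for this kind of indirect validation.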


2017 ◽  
Vol 33 (6) ◽  
pp. 471-474
Author(s):  
Syd Hiskey ◽  
Fahime Javenbakht ◽  
Nicholas A. Troop

Abstract. We describe the development of a brief version of the Bi-Directional Changes in Being Scale (BCIBS; Hiskey, Troop, & Joseph, 2006 ), a measure of phenomenological change following stressful and traumatic life events. The psychometric properties of the mini-BCIBS were explored using data drawn from a sample of female students, survivors of a discotheque fire, and a large-scale Internet survey. Results suggest the new measure retains the breadth of experiences captured by its predecessor and is psychometrically equivalent. The new tool awaits further development among clinical samples and may help researchers explore the long-term trajectory of posttraumatic growth phenomena.

