Mapping landslides using drones' full-motion videos

Author(s):  
Ionut Cosmin Sandric ◽  
Viorel Ilinca ◽  
Radu Irimia ◽  
Zenaida Chitu ◽  
Marta Jurchescu ◽  
...  

Rapid mapping of landslides plays an important role in both the scientific and emergency-management communities: it supports appropriate decisions in quasi-real time and helps diminish losses. Advances in high-resolution satellite and aerial imagery have steadily improved the spatial accuracy of landslide maps. In line with the latest developments in unmanned aerial vehicles and artificial intelligence, the current study provides an insight into the process of mapping landslides from full-motion videos by means of artificial intelligence. To achieve this goal, several drone flights were performed over areas located in the Romanian Subcarpathians, using quadcopters (DJI Phantom 4 and DJI Mavic 2 Enterprise) equipped with 12 MP RGB cameras. The flights were planned and executed to obtain an optimal number of pictures and videos, taken from various angles and heights over the study areas. Each image dataset was processed and orthorectified using Structure from Motion techniques. Similarly, each video was processed into a full-motion video with coordinates allocated to every frame. Samples of specific landslide features were collected by hand from the pictures and video frames and used to build the database needed to train a Mask R-CNN model. The samples were split into two datasets: 80% were used for the training process and the remaining 20% for the validation process. The model was trained for 50 epochs and reached an accuracy of approximately 86% on the training dataset and about 82% on the validation dataset. The study is part of an ongoing project, SlideMap 416PED, financed by UEFISCDI, Romania. More details about the project can be found at https://slidemap.geo-spatial.ro.
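For illustration, here is a minimal sketch of the 80/20 split and the Mask R-CNN training loop described above, assuming a PyTorch/torchvision pipeline; the `LandslideDataset` class, the `samples/` path, and all hyperparameters are hypothetical stand-ins, not the authors' code.

```python
# Sketch of the 80/20 split and Mask R-CNN setup described above.
# `LandslideDataset` is a hypothetical torch Dataset yielding
# (image, target) pairs with instance masks; path is a placeholder.
import torch
from torch.utils.data import random_split, DataLoader
from torchvision.models.detection import maskrcnn_resnet50_fpn

dataset = LandslideDataset("samples/")           # hypothetical dataset class
n_train = int(0.8 * len(dataset))                # 80% for training
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])

train_loader = DataLoader(train_set, batch_size=2, shuffle=True,
                          collate_fn=lambda b: tuple(zip(*b)))

model = maskrcnn_resnet50_fpn(num_classes=2)     # background + landslide
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

model.train()
for epoch in range(50):                          # 50 epochs, as in the study
    for images, targets in train_loader:
        losses = model(images, targets)          # dict of loss components
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```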

2021 ◽  
Author(s):  
Ying-Shi Sun ◽  
Yu-Hong Qu ◽  
Dong Wang ◽  
Yi Li ◽  
Lin Ye ◽  
...  

Abstract Background: Computer-aided diagnosis using deep learning algorithms has been applied in mammography, but there is as yet no large-scale clinical application. Methods: This study aimed to develop and verify an artificial intelligence model based on mammography. First, mammograms retrospectively collected from six centers were randomized into a training dataset and a validation dataset for establishing the model. Second, the model was tested by comparing the performance of 12 radiologists with and without it. Finally, prospectively collected multicenter mammograms were diagnosed by radiologists using the model. Detection and diagnostic capabilities were evaluated using the free-response receiver operating characteristic (FROC) curve and the ROC curve. Results: After matching, the sensitivity of the model for detecting lesions was 0.908 at a false-positive rate of 0.25 in unilateral images. The area under the ROC curve (AUC) for distinguishing benign from malignant lesions was 0.855 (95% CI: 0.830, 0.880). The performance of the 12 radiologists with the model was higher than that of the radiologists alone (AUC: 0.852 vs. 0.808, P = 0.005), and the mean reading time was shorter with the model than without it (62.28 s vs. 80.18 s, P = 0.03). In the prospective application, detection sensitivity reached 0.887 at a false-positive rate of 0.25; the AUC of radiologists with the model was 0.983 (95% CI: 0.978, 0.988), with sensitivity, specificity, PPV, and NPV of 94.36%, 98.07%, 87.76%, and 99.09%, respectively. Conclusions: The artificial intelligence model exhibits high accuracy in detecting and diagnosing breast lesions, improves diagnostic accuracy, and saves reading time. Trial registration: NCT03708978. Registered 17 April 2018, https://register.clinicaltrials.gov/prs/app/, NCT03708978.
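As a rough illustration of the reported operating points (the study itself uses an FROC curve for detection), the sketch below computes an AUC and reads sensitivity off a ROC curve at a false-positive rate of 0.25; the labels and scores are synthetic placeholders, not study data.

```python
# Compute AUC and sensitivity at a fixed false-positive rate of 0.25.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)            # 1 = malignant (synthetic)
y_score = y_true * 0.5 + rng.random(200) * 0.8   # synthetic model scores

auc = roc_auc_score(y_true, y_score)
fpr, tpr, _ = roc_curve(y_true, y_score)
sens_at_fpr025 = np.interp(0.25, fpr, tpr)       # sensitivity at FPR = 0.25
print(f"AUC = {auc:.3f}, sensitivity at FPR 0.25 = {sens_at_fpr025:.3f}")
```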


Author(s):  
Anifatul Faricha ◽  
M. Achirul Nanda ◽  
Siti Maghfirotul Ulyah ◽  
Ni'matut Tamimah ◽  
Enny Indasyah ◽  
...  

Predicting a disease outbreak requires predictive modeling that properly represents the dataset. This study presents comparative predictive modeling of disease outbreaks using two models: an optimizable support vector machine (SVM) and an optimizable Gaussian process regression (GPR). The dataset used in this study covers three case types: positive cases, recovered cases, and death cases. For each case type, the data are divided into a training dataset for the training process and an external validation dataset for the validation process. Across both processes, the root mean square error (RMSE) for positive, recovered, and death cases shows that the optimizable GPR is substantially more effective for prediction than the optimizable SVM: the optimizable GPR achieves an average RMSE of 19.54 in the training process and 15.85 in the validation process.
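A minimal sketch of this kind of comparison, assuming scikit-learn's SVR and GaussianProcessRegressor as stand-ins for the "optimizable" models (the study's hyperparameter optimization is not reproduced here); the case-count data is synthetic.

```python
# Fit an SVM regressor and a Gaussian process regressor on the same
# case counts and compare validation RMSE.
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
days = np.arange(200).reshape(-1, 1)                       # time index
cases = 50 + 0.8 * days.ravel() + rng.normal(0, 10, 200)   # synthetic counts

X_train, X_val, y_train, y_val = train_test_split(
    days, cases, test_size=0.2, shuffle=False)             # chronological split

for name, model in [("SVM", SVR()), ("GPR", GaussianProcessRegressor())]:
    model.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_val, model.predict(X_val)))
    print(f"{name}: validation RMSE = {rmse:.2f}")
```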


Author(s):  
James P. Howard ◽  
Catherine C. Stowell ◽  
Graham D. Cole ◽  
Kajaluxy Ananthan ◽  
Camelia D. Demetrescu ◽  
...  

Background: Artificial intelligence (AI) for echocardiography requires training and validation to the standards expected of humans. We developed an online platform and established the Unity Collaborative to build a dataset of expertise from 17 hospitals for the training, validation, and standardization of such techniques. Methods: The training dataset consisted of 2056 individual frames drawn at random from 1265 parasternal long-axis video-loops of patients undergoing clinical echocardiography in 2015 to 2016. Nine experts labeled these images using our online platform. From this, we trained a convolutional neural network to identify keypoints. Subsequently, 13 experts labeled a validation dataset of the end-systolic and end-diastolic frames from 100 new video-loops, twice each. The 26-opinion consensus was used as the reference standard. The primary outcome was precision SD, the SD of the differences between the AI measurement and the expert consensus. Results: In the validation dataset, the AI's precision SD for left ventricular internal dimension was 3.5 mm. For context, the precision SD of individual expert measurements against the expert consensus was 4.4 mm. The intraclass correlation coefficient between the AI and the expert consensus was 0.926 (95% CI, 0.904–0.944), compared with 0.817 (0.778–0.954) between individual experts and the expert consensus. For interventricular septum thickness, precision SD was 1.8 mm for the AI (intraclass correlation coefficient, 0.809; 0.729–0.967) versus 2.0 mm for individuals (intraclass correlation coefficient, 0.641; 0.568–0.716). For posterior wall thickness, precision SD was 1.4 mm for the AI (intraclass correlation coefficient, 0.535 [95% CI, 0.379–0.661]) versus 2.2 mm for individuals (0.366 [0.288–0.462]). We present all images and annotations; these highlight challenging cases, including poor image quality and tapered ventricles. Conclusions: Experts at multiple institutions successfully cooperated to build a collaborative AI, which performed as well as individual experts. Future echocardiographic AI research should use a consensus of experts as a reference. Our collaborative welcomes new partners who share our commitment to publish all methods, code, annotations, and results openly.
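The primary outcome has a direct one-line implementation; the sketch below computes precision SD exactly as defined above (the SD of the AI-minus-consensus differences), with illustrative placeholder measurements rather than the study's data.

```python
# Precision SD: standard deviation of differences between AI measurements
# and the expert consensus. Values are illustrative, in millimetres.
import numpy as np

ai = np.array([48.2, 51.0, 45.3, 50.1])          # AI LVID measurements (mm)
consensus = np.array([47.0, 52.5, 44.0, 49.8])   # 26-opinion expert consensus

precision_sd = np.std(ai - consensus, ddof=1)    # SD of paired differences
print(f"precision SD = {precision_sd:.2f} mm")
```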


Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Vitor Mendes Pereira ◽  
Yoni Donner ◽  
Gil Levi ◽  
Nicole Cancelliere ◽  
Erez Wasserman ◽  
...  

Cerebral aneurysms (CAs) occur in 5–10% of the population. They are often missed because their detection requires a very methodical diagnostic approach. We developed an algorithm using artificial intelligence to assist in, and supervise, the detection of CAs. Methods: We developed an automated algorithm to detect CAs, based on a 3D convolutional neural network modeled as a U-Net. We included all saccular CAs from 2014 to 2016 from a single center. Normal and pathological datasets were prepared and annotated in 3D using an in-house platform. To assess accuracy and optimize the model, we assessed preliminary results using a validation dataset. After the algorithm was trained, a dataset was used to evaluate final CA detection and aneurysm measurements. The accuracy of the algorithm was derived using ROC curves and Pearson correlation tests. Results: We used 528 CTAs with 674 aneurysms at the following locations: ACA (3%), ACA/ACOM (26.1%), ICA/MCA (26.3%), MCA (29.4%), PCA/PCOM (2.3%), basilar (6.6%), vertebral (2.3%), and other (3.7%). The training dataset consisted of 189 CA scans. We plotted ROC curves and achieved an AUC of 0.85 for unruptured and 0.88 for ruptured CAs. We improved model performance by enlarging the training dataset with various methods of data augmentation to leverage the data to its fullest. The final model was tested on the 528 CTAs using 5-fold cross-validation plus an additional set of 2400 normal CTAs. There was a significant improvement over the initial assessment, with an AUC of 0.93 for unruptured and 0.94 for ruptured CAs. The algorithm detected larger aneurysms more accurately, reaching an AUC of 0.97 and a specificity of 91.5% at 90% sensitivity for aneurysms larger than 7 mm. It also accurately detected CAs in the basilar (AUC 0.97) and MCA/ACOM (AUC 0.94) locations. Volume measurements (mm³) by the model achieved a Pearson correlation of 99.36% with the annotated volumes. Conclusion: The Viz.ai aneurysm algorithm was able to detect and measure ruptured and unruptured CAs in consecutive CTAs. The model has demonstrated that a deep learning AI algorithm can achieve clinically useful levels of accuracy for clinical decision support.
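As an illustration of the operating-point analysis above (specificity at 90% sensitivity), here is a short sketch using scikit-learn's ROC utilities on synthetic detector scores, not the study's data.

```python
# Read specificity off the ROC curve at 90% sensitivity.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 500)                 # 1 = aneurysm present (synthetic)
y_score = y_true + rng.normal(0, 0.7, 500)       # synthetic detector scores

fpr, tpr, _ = roc_curve(y_true, y_score)
idx = np.searchsorted(tpr, 0.90)                 # first point with >= 90% sensitivity
print(f"specificity at 90% sensitivity = {1 - fpr[idx]:.3f}")
```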


Author(s):  
Miguel Mascarenhas Saraiva ◽  
Tiago Ribeiro ◽  
João Afonso ◽  
João P.S. Ferreira ◽  
Hélder Cardoso ◽  
...  

Introduction: Capsule endoscopy has revolutionized the management of patients with obscure gastrointestinal bleeding. Nevertheless, reading capsule endoscopy images is time-consuming and prone to overlooking significant lesions, which limits its diagnostic yield. We aimed to create a deep learning algorithm for the automatic detection of blood and hematic residues in the enteric lumen in capsule endoscopy exams. Methods: A convolutional neural network was developed from a total pool of 22,095 capsule endoscopy images (13,510 containing luminal blood and 8,585 of normal mucosa or other findings). A training dataset comprising 80% of the total pool was defined. The performance of the network was compared to a consensus classification provided by two specialists in capsule endoscopy. Subsequently, we evaluated the network on an independent validation dataset (the remaining 20% of the image pool), calculating its sensitivity, specificity, accuracy, and precision. Results: Our convolutional neural network detected blood and hematic residues in the small-bowel lumen with an accuracy and precision of 98.5% and 98.7%, respectively; sensitivity and specificity were 98.6% and 98.9%, respectively. The analysis of the testing dataset was completed in 24 s (approximately 184 frames/s). Discussion/Conclusion: We have developed an artificial intelligence tool capable of effectively detecting luminal blood. Such tools may enhance the diagnostic accuracy of capsule endoscopy when evaluating patients presenting with obscure small-bowel bleeding.
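The four validation metrics reported above derive directly from a binary confusion matrix; the sketch below shows the computation on illustrative placeholder predictions, not the study's data.

```python
# Sensitivity, specificity, accuracy, and precision from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = luminal blood present
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])   # network output (illustrative)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.3f}")
print(f"specificity = {tn / (tn + fp):.3f}")
print(f"accuracy    = {(tp + tn) / (tp + tn + fp + fn):.3f}")
print(f"precision   = {tp / (tp + fp):.3f}")
```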


Endoscopy ◽  
2020 ◽  
Vol 52 (12) ◽  
pp. 1077-1083 ◽  
Author(s):  
Ken Namikawa ◽  
Toshiaki Hirasawa ◽  
Kaoru Nakano ◽  
Yohei Ikenoyama ◽  
Mitsuaki Ishioka ◽  
...  

Abstract Background We previously reported, for the first time, the usefulness of artificial intelligence (AI) systems in detecting gastric cancers. However, the original convolutional neural network (O-CNN) employed in that study had a relatively low positive predictive value (PPV). We therefore aimed to develop an advanced AI-based diagnostic system and evaluate its applicability to the classification of gastric cancers and gastric ulcers. Methods We constructed an advanced CNN (A-CNN) by adding a new training dataset (4453 gastric ulcer images from 1172 lesions) to the O-CNN, which had been trained using 13,584 gastric cancer and 373 gastric ulcer images. The diagnostic performance of the A-CNN in classifying gastric cancers and ulcers was retrospectively evaluated using an independent validation dataset (739 images from 100 early gastric cancers and 720 images from 120 gastric ulcers) and compared with that of the O-CNN by estimating overall classification accuracy. Results The sensitivity, specificity, and PPV of the A-CNN in classifying gastric cancer at the lesion level were 99.0% (95% confidence interval [CI] 94.6%–100%), 93.3% (95% CI 87.3%–97.1%), and 92.5% (95% CI 85.8%–96.7%), respectively; for classifying gastric ulcers they were 93.3% (95% CI 87.3%–97.1%), 99.0% (95% CI 94.6%–100%), and 99.1% (95% CI 95.2%–100%), respectively. At the lesion level, the overall accuracies of the O-CNN and A-CNN for classifying gastric cancers and gastric ulcers were 45.9% (gastric cancers 100%, gastric ulcers 0.8%) and 95.9% (gastric cancers 99.0%, gastric ulcers 93.3%), respectively. Conclusion The newly developed AI-based diagnostic system can effectively classify gastric cancers and gastric ulcers.
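The exact CIs quoted above are consistent with Clopper-Pearson intervals on lesion-level counts; for instance, 99 of 100 cancers correctly classified yields 99.0% (94.6%–100%). A hedged sketch using statsmodels follows (the abstract does not state which interval method the authors used).

```python
# Exact (Clopper-Pearson) 95% CI for a lesion-level proportion.
from statsmodels.stats.proportion import proportion_confint

correct, total = 99, 100                        # e.g., sensitivity of 99.0%
low, high = proportion_confint(correct, total, alpha=0.05, method="beta")
print(f"sensitivity = {correct / total:.3f} (95% CI {low:.3f}-{high:.3f})")
```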


2020 ◽  
Vol 27 ◽  
Author(s):  
Zaheer Ullah Khan ◽  
Dechang Pi

Background: S-sulfenylation (S-sulphenylation, or sulfenic acid) of proteins is a special kind of post-translational modification that plays an important role in various physiological and pathological processes such as cytokine signaling, transcriptional regulation, and apoptosis. To complement existing wet-lab methods, several computational models have been developed for predicting sulfenylation cysteine (SC) sites. However, the performance of these models has not been satisfactory, owing to inefficient feature schemes, severe class imbalance, and the lack of an intelligent learning engine. Objective: Our motivation in this study is to establish a strong, novel computational predictor for discriminating sulfenylation from non-sulfenylation sites. Methods: We report an innovative bioinformatics feature-encoding tool, named DeepSSPred, in which encoded features are obtained via an n-segmented hybrid feature scheme. The synthetic minority oversampling technique was then employed to cope with the severe imbalance between SC sites (minority class) and non-SC sites (majority class). A state-of-the-art 2D convolutional neural network was employed, with a rigorous 10-fold jackknife cross-validation for model validation and authentication. Results: With a strong discrete representation of the feature space, a capable machine learning engine, and an unbiased presentation of the underlying training data, the proposed framework yields an excellent model that outperforms all existing published studies, with an MCC 6% higher than the previous best. On an independent dataset, the previous best study did not provide sufficient details for comparison. Relative to the second-best method, the model achieves increases of 7.5% in accuracy, 1.22% in Sn, 12.91% in Sp, and 13.12% in MCC on the training data, and of 12.13% in accuracy, 27.25% in Sn, 2.25% in Sp, and 30.37% in MCC on an independent dataset. These empirical analyses show the superior performance of the proposed model over existing studies on both the training and independent datasets. Conclusion: In this research, we have developed a novel sequence-based automated predictor for SC sites, called DeepSSPred. Empirical simulations on a training dataset and an independent validation dataset reveal the efficacy of the proposed model. The good performance of DeepSSPred stems from several factors: the novel discriminative feature-encoding schemes, the SMOTE technique, and the careful construction of the prediction model through a tuned 2D-CNN classifier. We believe this work provides useful insight into the further prediction of S-sulfenylation characteristics and functionalities, and we hope the predictor will prove significantly helpful for large-scale discrimination of unknown SC sites in particular and for designing new pharmaceutical drugs in general.
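A minimal sketch of the SMOTE rebalancing step named above, using imbalanced-learn on a synthetic feature matrix; the study's actual n-segmented hybrid feature encoding is not reproduced here.

```python
# Rebalance minority SC-sites against majority non-SC sites with SMOTE.
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(3)
X = rng.random((1000, 40))                      # placeholder encoded features
y = (rng.random(1000) < 0.1).astype(int)        # ~10% minority SC-sites

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(f"class counts before: {np.bincount(y)}, after: {np.bincount(y_res)}")
```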


This book is the first to examine the history of imaginative thinking about intelligent machines. As real artificial intelligence (AI) begins to touch on all aspects of our lives, this long narrative history shapes how the technology is developed, deployed, and regulated. It is therefore a crucial social and ethical issue. Part I of this book provides a historical overview from ancient Greece to the start of modernity. These chapters explore the revealing prehistory of key concerns of contemporary AI discourse, from the nature of mind and creativity to issues of power and rights, from the tension between fascination and ambivalence to investigations into artificial voices and technophobia. Part II focuses on the twentieth and twenty-first centuries in which a greater density of narratives emerged alongside rapid developments in AI technology. These chapters reveal not only how AI narratives have consistently been entangled with the emergence of real robotics and AI, but also how they offer a rich source of insight into how we might live with these revolutionary machines. Through their close textual engagements, these chapters explore the relationship between imaginative narratives and contemporary debates about AI’s social, ethical, and philosophical consequences, including questions of dehumanization, automation, anthropomorphization, cybernetics, cyberpunk, immortality, slavery, and governance. The contributions, from leading humanities and social science scholars, show that narratives about AI offer a crucial epistemic site for exploring contemporary debates about these powerful new technologies.


2021 ◽  
Vol 14 (1) ◽  
Author(s):  
Guoliang Jia ◽  
Zheyu Song ◽  
Zhonghang Xu ◽  
Youmao Tao ◽  
Yuanyu Wu ◽  
...  

Abstract Background Bioinformatics methods were used to analyze the skin cutaneous melanoma (SKCM) gene expression profile, to provide a theoretical basis for further study of the mechanism underlying metastatic SKCM and its clinical prognosis. Methods We downloaded the gene expression profiles of 358 metastatic and 102 primary (nonmetastatic) CM samples from The Cancer Genome Atlas (TCGA) database as a training dataset, and the GSE65904 dataset from the National Center for Biotechnology Information database as a validation dataset. Differentially expressed genes (DEGs) were screened using the limma package in R 3.4.1, and prognosis-related feature DEGs were screened using logit regression (LR) and survival analyses. We also used the STRING online database, Cytoscape software, and the Database for Annotation, Visualization and Integrated Discovery for protein–protein interaction network, Gene Ontology, and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses based on the screened DEGs. Results Of the 876 DEGs selected, 11 (ZNF750, NLRP6, TGM3, KRTDAP, CAMSAP3, KRT6C, CALML5, SPRR2E, CD3G, RTP5, and FAM83C) were screened using LR analysis. The survival prognosis of the nonmetastatic group was better than that of the metastatic group in both the TCGA training and validation datasets. The 11 DEGs were involved in 9 KEGG signaling pathways; of these, CALML5 was a feature DEG involved in the melanogenesis pathway, for which 12 targets were collected. Conclusion The feature DEGs screened by LR, such as CALML5, are related to the prognosis of metastatic CM. Our results provide new ideas for exploring the molecular mechanism underlying CM metastasis and for finding new diagnostic and prognostic markers.
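As a sketch of the survival-analysis screening step (not the authors' R/limma pipeline), the snippet below runs a log-rank comparison between two groups with lifelines, on synthetic survival times.

```python
# Log-rank test between nonmetastatic and metastatic groups, as used to
# screen prognosis-related DEGs. Times and events are synthetic placeholders.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(4)
t_nonmet = rng.exponential(60, 100)             # survival times, months
t_met = rng.exponential(35, 100)
e_nonmet = rng.integers(0, 2, 100)              # 1 = death observed
e_met = rng.integers(0, 2, 100)

res = logrank_test(t_nonmet, t_met,
                   event_observed_A=e_nonmet, event_observed_B=e_met)
print(f"log-rank p = {res.p_value:.4f}")
```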


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Kara-Louise Royle ◽  
David A. Cairns

Abstract Background The United Kingdom Myeloma Research Alliance (UK-MRA) Myeloma Risk Profile is a prognostic model for overall survival. It was trained and tested on clinical trial data, aiming to improve the stratification of transplant-ineligible (TNE) patients with newly diagnosed multiple myeloma. Missing data is a common problem that affects the development and validation of prognostic models, where decisions on how to address missingness have implications for the choice of methodology. Methods Model building: The training and test datasets were the TNE pathways from two large randomised, multicentre, phase III clinical trials. Potential prognostic factors were identified by expert opinion. Missing data in the training dataset were imputed using multiple imputation by chained equations. Univariate analysis fitted Cox proportional hazards models in each imputed dataset, with the estimates combined by Rubin's rules. Multivariable analysis applied penalised Cox regression models, with a fixed penalty term across the imputed datasets. The estimates from each imputed dataset and bootstrap standard errors were combined by Rubin's rules to define the prognostic model. Model assessment: Calibration was assessed by visualising the observed and predicted probabilities across the imputed datasets. Discrimination was assessed by combining the prognostic separation D-statistic from each imputed dataset by Rubin's rules. Model validation: The D-statistic was applied in a bootstrap internal validation process in the training dataset and an external validation process in the test dataset, where acceptable performance was pre-specified. Development of risk groups: Risk groups were defined using the tertiles of the combined prognostic index, obtained by combining the prognostic index from each imputed dataset by Rubin's rules. Results The training dataset included 1852 patients, 1268 (68.47%) with complete case data. Ten imputed datasets were generated. Five hundred and twenty patients were included in the test dataset. The D-statistic for the prognostic model was 0.840 (95% CI 0.716–0.964) in the training dataset and 0.654 (95% CI 0.497–0.811) in the test dataset, and the corrected D-statistic was 0.801. Conclusion The decision to impute missing covariate data in the training dataset influenced the methods implemented to train and test the model. To extend the current literature and aid future researchers, we have presented a detailed example of one approach. Whilst our example is not without limitations, a benefit is that all of the patient information available in the training dataset was utilised to develop the model. Trial registration Both trials were registered: Myeloma IX, ISRCTN68454111, registered 21 September 2000; Myeloma XI, ISRCTN49407852, registered 24 June 2009.
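Rubin's rules, invoked repeatedly above, pool one estimate per imputed dataset; here is a minimal sketch with illustrative numbers (the study generated m = 10 imputed datasets; m = 5 here).

```python
# Pool a parameter estimate across m imputed datasets by Rubin's rules.
# `estimates` and `variances` are per-imputation point estimates and
# squared standard errors (illustrative placeholders).
import numpy as np

estimates = np.array([0.82, 0.79, 0.85, 0.80, 0.84])   # one per imputed dataset
variances = np.array([0.010, 0.012, 0.009, 0.011, 0.010])

m = len(estimates)
q_bar = estimates.mean()                    # pooled point estimate
w = variances.mean()                        # within-imputation variance
b = estimates.var(ddof=1)                   # between-imputation variance
total_var = w + (1 + 1 / m) * b             # Rubin's total variance
print(f"pooled estimate = {q_bar:.3f}, SE = {np.sqrt(total_var):.3f}")
```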

