Cloud Governance

Author(s):  
Anustup Mukherjee ◽  
Harjeet Kaur

Artificial intelligence, through the power of computer vision, is creating a new genre in the detection industry. Here, AI uses computer vision to create an advanced educational LMS that detects student emotions during online classes and interviews and judges their understanding and concentration level. It also generates automated content according to their needs. This LMS judges not only a student's audio, video, and images but also their tone of voice. Through this judgement, the AI model understands how much a student is learning, along with their effectiveness, intellect, and weaknesses. In this chapter, the deep learning models VGGNet and AlexNet are used for computer vision in the LMS. This LMS architecture will be able to work like a virtual teacher that acts as a parental guide to students.
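As a rough illustration of how such an LMS might wire a VGG-style backbone into emotion detection, the sketch below fine-tunes a pretrained VGG-16 on webcam frames. The emotion labels, preprocessing, and frozen-feature strategy are assumptions for illustration and are not taken from the chapter itself.

```python
# Hypothetical sketch: fine-tuning a VGG-16 backbone to classify student
# facial expressions from webcam frames. Class names are illustrative
# placeholders, not part of the chapter.
import torch
import torch.nn as nn
from torchvision import models, transforms

EMOTIONS = ["attentive", "confused", "bored", "frustrated", "neutral"]  # assumed labels

def build_emotion_model(num_classes: int = len(EMOTIONS)) -> nn.Module:
    # Start from an ImageNet-pretrained VGG-16 and replace the classifier head.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False          # freeze convolutional features
    model.classifier[6] = nn.Linear(4096, num_classes)
    return model

# Standard ImageNet preprocessing for incoming webcam frames.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = build_emotion_model()
model.eval()
```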

2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, with relatively recent developments in big data and deep learning and increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose To review the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of studies was conducted based on contingency tables. Results The specificity for all the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. The pooled specificity and sensitivity for comparison between radiology professionals and deep learning algorithms were 91% and 81% for deep learning models and 85% and 73% for radiology professionals (p < 0.000), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion Radiomic information extracted through machine learning programs from images may not be discernible through visual examination and thus may improve the prognostic and diagnostic value of data sets.
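For readers unfamiliar with how a contingency-table meta-analysis pools accuracy figures, the following is a minimal sketch that naively sums 2x2 tables across studies; the input numbers are invented, and a full analysis of this kind would normally use a bivariate random-effects model rather than simple pooling.

```python
# Minimal sketch of pooling sensitivity and specificity from the 2x2
# contingency tables extracted per study. The numbers below are made up.
from dataclasses import dataclass

@dataclass
class Contingency:
    tp: int
    fp: int
    fn: int
    tn: int

def pooled_sens_spec(tables):
    # Naive pooling: sum the cells across studies, then recompute the rates.
    tp = sum(t.tp for t in tables)
    fp = sum(t.fp for t in tables)
    fn = sum(t.fn for t in tables)
    tn = sum(t.tn for t in tables)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative input only: one table per study, highest-accuracy operating point.
studies = [Contingency(85, 10, 15, 90), Contingency(40, 5, 8, 70)]
print(pooled_sens_spec(studies))
```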


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Andre Esteva ◽  
Katherine Chou ◽  
Serena Yeung ◽  
Nikhil Naik ◽  
Ali Madani ◽  
...  

Abstract A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields—including medicine—to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques—powered by deep learning—for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit—including cardiology, pathology, dermatology, and ophthalmology—and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles that remain for real-world clinical deployment of these technologies.


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. Primarily, machine learning has been used for 3D seismic data processing, seismic facies analysis, and well log data correlation. The rapid development in technology, with open-source artificial intelligence libraries and the accessibility of affordable computer graphics processing units (GPUs), makes the application of machine learning in geosciences increasingly tractable. However, the application of artificial intelligence in structural interpretation workflows of subsurface datasets is still ambiguous. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms to discriminate between geological image datasets. Four different image datasets have been used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers, and basement), fold types with three classes (buckle, chevron, and conjugate), fault types with three classes (normal, reverse, and thrust), and fold-thrust geometries with three classes (fault bend fold, fault propagation fold, and detachment fold). These image datasets are used to investigate three machine learning models: a feedforward linear neural network model and two convolutional neural network models (a sequential model of 2D convolution layers and a residual-block model, ResNet, with 9, 34, and 50 layers). Validation and testing datasets form a critical part of assessing a model's performance accuracy. The ResNet model records the highest performance accuracy score of the machine learning models tested. Our CNN image classification model analysis provides a framework for applying machine learning to increase structural interpretation efficiency, and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to sub-surface structural interpretation workflows.
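A hedged sketch of the kind of supervised CNN classifier described here is shown below: a pretrained ResNet-34 fine-tuned on the five-class seismic-character dataset. The folder layout, image size, and class names are assumptions for illustration only.

```python
# Illustrative sketch: fine-tune a ResNet-34 backbone on a five-class
# seismic-character image dataset. The directory layout is assumed:
# seismic_dataset/train/<class_name>/*.png
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

CLASSES = ["faults", "folds", "salt", "flat_layers", "basement"]

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("seismic_dataset/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # new classification head
```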


2021 ◽  
Vol 6 (5) ◽  
pp. 10-15
Author(s):  
Ela Bhattacharya ◽  
D. Bhattacharya

COVID-19 has emerged as the latest worrisome pandemic, reported to have had its outbreak in Wuhan, China. The infection spreads by means of human contact and, as a result, has caused massive numbers of infections across 200 countries around the world. Artificial intelligence has likewise contributed to managing the COVID-19 pandemic in various ways within a short span of time. The deep neural networks explored in this paper have contributed to the detection of COVID-19 from imaging sources. This paper investigates the datasets, pre-processing, segmentation, feature extraction, classification, and test results, which can be useful for discovering future directions in the domain of automatic diagnosis of the disease using artificial intelligence-based frameworks.
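As a purely illustrative sketch of the classification stage such frameworks rely on, the snippet below defines a small binary CNN for chest X-ray images (COVID-19 vs. normal). The architecture and input size are assumptions and are not drawn from any specific study covered in the paper.

```python
# Illustrative only: a small binary CNN for grayscale chest X-ray scans.
import torch
import torch.nn as nn

class SmallCovidNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 2),   # two classes: COVID-19, normal
        )

    def forward(self, x):                 # x: (batch, 1, 224, 224)
        return self.classifier(self.features(x))

model = SmallCovidNet()
logits = model(torch.randn(1, 1, 224, 224))
```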


2019 ◽  
Vol 15 (11) ◽  
pp. 155014771988313 ◽  
Author(s):  
Zishuo Zhou ◽  
Zahid Akhtar ◽  
Ka Lok Man ◽  
Kamran Siddique

To enhance the safety and stability of autonomous vehicles, we present a deep learning platooning-based video information-sharing Internet of Things framework in this study. The proposed Internet of Things framework incorporates concepts and mechanisms from several domains of computer science, such as computer vision, artificial intelligence, sensor technology, and communication technology. The information captured by the camera, such as road edges, traffic lights, and zebra lines, is highlighted using computer vision. The semantics of the highlighted information is recognized by artificial intelligence. Sensors provide information on the direction and distance of obstacles, as well as their speed and moving direction. Communication technology is applied to share the information among the vehicles. Since vehicles have a high probability of encountering accidents in congested locations, the proposed system enables vehicles to perform self-positioning relative to other vehicles within a certain range to reinforce their safety and stability. The empirical evaluation shows the viability and efficacy of the proposed system in such situations. Moreover, collision time is decreased considerably compared with traditional systems.
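The shape of the shared information can be sketched as a simple per-vehicle perception message broadcast to nearby platoon members. The field names and the UDP-multicast transport below are hypothetical, chosen only to illustrate the idea of video-information sharing between vehicles.

```python
# Hypothetical per-vehicle message for sharing camera/sensor detections
# with nearby platoon members over UDP multicast. Field names are assumed.
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class PerceptionMessage:
    vehicle_id: str
    timestamp: float
    detections: list          # e.g. [{"label": "traffic_light", "state": "red", "distance_m": 12.4}]
    speed_mps: float
    heading_deg: float

def broadcast(msg: PerceptionMessage, group: str = "239.0.0.1", port: int = 5007) -> None:
    # Share the detection summary with vehicles in range.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(json.dumps(asdict(msg)).encode(), (group, port))

broadcast(PerceptionMessage("veh-42", time.time(),
                            [{"label": "zebra_line", "distance_m": 8.0}], 13.9, 87.0))
```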


Author(s):  
Mehreen Sirshar ◽  
Syeda Hafsa Ali ◽  
Haleema Sadia Baig

Over the last few decades there has been exponential growth in IT, motivating IT professionals and scientists to explore new dimensions, resulting in the advancement of artificial intelligence and its subcategories such as computer vision, deep learning, and augmented reality (AR). AR is a comparatively new area that was initially explored for gaming, but recently a lot of work has been done using AR in education. Most of this work focuses on improving students' understanding and motivation. Like any other project, the performance of an AR-based project is determined by customer satisfaction, which is usually affected by the triple constraints: cost, time, and scope. Many studies have shown that most projects remain under development because they are unable to overcome these constraints and meet project objectives. We were unable to find any notable work on project management for augmented reality systems and applications. Therefore, in this paper, we propose a system for the management of AR applications that mainly focuses on addressing the triple constraints to meet the desired objectives. Each variable is further divided into subprocesses, and by following these processes successful completion of the project can be achieved.
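A hypothetical sketch of how the proposed triple-constraint management could be represented in code is given below: each constraint is decomposed into subprocesses whose planned-versus-actual variance rolls up into an overall project check. The names and the 10% tolerance are illustrative assumptions, not part of the paper.

```python
# Illustrative data model for triple-constraint (cost, time, scope) tracking.
from dataclasses import dataclass, field

@dataclass
class Subprocess:
    name: str
    planned: float
    actual: float = 0.0

@dataclass
class Constraint:
    name: str                      # "cost", "time", or "scope"
    subprocesses: list = field(default_factory=list)

    def variance(self) -> float:
        # Relative overrun of the constraint across its subprocesses.
        planned = sum(s.planned for s in self.subprocesses) or 1.0
        actual = sum(s.actual for s in self.subprocesses)
        return (actual - planned) / planned

def project_on_track(constraints, tolerance: float = 0.1) -> bool:
    # The project meets its objectives if no constraint exceeds the tolerance.
    return all(c.variance() <= tolerance for c in constraints)
```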


BMC Cancer ◽  
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Sergei Bedrikovetski ◽  
Nagendra N. Dudi-Venkata ◽  
Hidde M. Kroon ◽  
Warren Seow ◽  
Ryash Vather ◽  
...  

Abstract Background Artificial intelligence (AI) is increasingly being used in medical imaging analysis. We aimed to evaluate the diagnostic accuracy of AI models used for the detection of lymph node metastasis on pre-operative staging imaging for colorectal cancer. Methods A systematic review was conducted according to PRISMA guidelines using a literature search of PubMed (MEDLINE), EMBASE, IEEE Xplore and the Cochrane Library for studies published from January 2010 to October 2020. Studies reporting on the accuracy of radiomics models and/or deep learning for the detection of lymph node metastasis in colorectal cancer by CT/MRI were included. Conference abstracts and studies reporting accuracy of image segmentation rather than nodal classification were excluded. The quality of the studies was assessed using a modified questionnaire of the QUADAS-2 criteria. Characteristics and diagnostic measures from each study were extracted. Pooling of the area under the receiver operating characteristic curve (AUROC) was calculated in a meta-analysis. Results Seventeen eligible studies were identified for inclusion in the systematic review, of which 12 used radiomics models and five used deep learning models. High risk of bias was found in two studies, and there was significant heterogeneity among radiomics papers (73.0%). In rectal cancer, there was a per-patient AUROC of 0.808 (0.739–0.876) and 0.917 (0.882–0.952) for radiomics and deep learning models, respectively. Both models performed better than the radiologists, who had an AUROC of 0.688 (0.603–0.772). Similarly, in colorectal cancer, radiomics models with a per-patient AUROC of 0.727 (0.633–0.821) outperformed the radiologist, who had an AUROC of 0.676 (0.627–0.725). Conclusion AI models have the potential to predict lymph node metastasis more accurately in rectal and colorectal cancer; however, radiomics studies are heterogeneous and deep learning studies are scarce. Trial registration PROSPERO CRD42020218004.
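For intuition about how per-patient AUROC values can be pooled across studies, the following sketch applies a DerSimonian-Laird random-effects model to invented AUROC estimates and variances; it is not the exact procedure or data used in the review.

```python
# Minimal sketch of random-effects pooling of per-study AUROC values
# (DerSimonian-Laird estimator for the between-study variance).
import math

def pooled_auroc(aurocs, variances):
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * a for wi, a in zip(w, aurocs)) / sum(w)
    q = sum(wi * (a - fixed) ** 2 for wi, a in zip(w, aurocs))
    df = len(aurocs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * a for wi, a in zip(w_star, aurocs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Invented per-study AUROCs and variances, for illustration only.
print(pooled_auroc([0.81, 0.92, 0.73], [0.002, 0.001, 0.003]))
```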


2018 ◽  
Author(s):  
Reem Elsousy ◽  
Nagarajan Kathiresan ◽  
Sabri Boughorbel

Abstract The success of deep learning has been shown in various fields including computer vision, speech recognition, natural language processing, and bioinformatics. The advance of deep learning in computer vision has been an important source of inspiration for other research fields. The objective of this work is to adapt known deep learning models borrowed from computer vision, such as VGGNet, ResNet, and AlexNet, for the classification of biological sequences. In particular, we are interested in the task of splice site identification based on raw DNA sequences. We focus on the role of model architecture depth in model training and classification performance. We show that deep learning models outperform traditional classification methods (SVM, Random Forests, and Logistic Regression) for large training sets of raw DNA sequences. Three model families are analyzed in this work, namely VGGNet, AlexNet, and ResNet, with three depth levels defined for each family. The models are benchmarked using the following metrics: area under the ROC curve (AUC), number of model parameters, and number of floating-point operations. Our extensive experimental evaluation shows that shallow architectures have an overall better performance than deep models. We introduce a shallow version of ResNet, named S-ResNet, and show that it gives a good trade-off between model complexity and classification performance. Author summary Deep learning has been widely applied to various fields in research and industry. It has also been successfully applied to genomics, and in particular to splice site identification. We are interested in the use of advanced neural networks borrowed from computer vision. We explored well-known models and their usability for the problem of splice site identification from raw sequences. Our extensive experimental analysis shows that shallow models outperform deep models. We introduce a new model called S-ResNet, which gives a good trade-off between computational complexity and classification accuracy.
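A minimal sketch of the sequence-adaptation idea, assuming one-hot encoding over {A, C, G, T} and a shallow 1-D CNN, is shown below. The window length and layer sizes are illustrative and are not the exact S-ResNet configuration from the paper.

```python
# Sketch: one-hot encode raw DNA and classify splice site vs. non-site
# with a shallow 1-D convolutional network.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    # Encode a DNA string as a (4, len) tensor; unknown bases stay all-zero.
    x = torch.zeros(4, len(seq))
    for i, b in enumerate(seq.upper()):
        if b in BASES:
            x[BASES.index(b), i] = 1.0
    return x

class SpliceCNN(nn.Module):
    def __init__(self, seq_len: int = 400):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Flatten(),
            nn.Linear(64 * (seq_len // 16), 2),   # splice site vs. non-site
        )

    def forward(self, x):                         # x: (batch, 4, seq_len)
        return self.net(x)

model = SpliceCNN()
logits = model(one_hot("ACGT" * 100).unsqueeze(0))
```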


2021 ◽  
Author(s):  
Wataru Uegami ◽  
Andrey Bychkov ◽  
Mutsumi Ozasa ◽  
Kazuki Uehara ◽  
Kensuke Kataoka ◽  
...  

Interstitial pneumonia is a heterogeneous disease with a progressive course and poor prognosis, at times even worse than those of the main cancer types. Histopathological examination is crucial for its diagnosis and for the estimation of prognosis. However, the evaluation strongly depends on the experience of pathologists, and the reproducibility of diagnosis is low. Herein, we propose MIXTURE (huMan-In-the-loop eXplainable artificial intelligence Through the Use of REcurrent training), a method to develop deep learning models for extracting pathologically significant findings based on an expert pathologist's perspective with a small annotation effort. The MIXTURE procedure consists of three steps. First, we created feature extractors for tiles from whole slide images using self-supervised learning. Similar-looking tiles were clustered based on the output features, and pathologists then integrated the pathologically synonymous clusters. Using the integrated clusters as labeled data, deep learning models to classify the tiles into pathological findings were created by transfer learning from the feature extractors. We developed three models for different magnifications. Using these extracted findings, our model was able to predict the diagnosis of usual interstitial pneumonia (UIP), a finding suggestive of progressive disease, with high accuracy (AUC 0.90). This high accuracy could not be achieved without the integration of findings by pathologists. The patients predicted as UIP had a significantly poorer prognosis (five-year overall survival [OS]: 55.4%) than those predicted as non-UIP (OS: 95.2%). A Cox proportional hazards model for each microscopic finding and prognosis identified dense fibrosis, fibroblastic foci, elastosis, and lymphocyte aggregation as independent risk factors. We suggest that MIXTURE may serve as a model approach for different diseases evaluated by medical imaging, including pathology and radiology, and be a prototype for artificial intelligence that can collaborate with humans.
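A simplified sketch of the first two MIXTURE steps (tile feature extraction followed by clustering for pathologist review) might look like the following; an ImageNet-pretrained ResNet-18 stands in here for the self-supervised feature extractor, and the tile folder and cluster count are assumptions.

```python
# Simplified stand-in for MIXTURE steps 1-2: featurize WSI tiles, then
# cluster similar-looking tiles for pathologist review and integration.
import torch
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader
from sklearn.cluster import KMeans

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
tiles = datasets.ImageFolder("wsi_tiles", transform=transform)   # assumed tile folder
loader = DataLoader(tiles, batch_size=64, shuffle=False)

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()        # keep 512-d features, drop the classifier
backbone.eval()

features = []
with torch.no_grad():
    for images, _ in loader:
        features.append(backbone(images))
features = torch.cat(features).numpy()

# Cluster tiles; pathologists would then merge clusters that represent
# pathologically synonymous findings before the transfer-learning step.
labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(features)
```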

