Machine learning and deep neural network — Artificial intelligence core for lab and real-world test and validation for ADAS and autonomous vehicles: AI for efficient and quality test and validation

Author(s):  
Harsha Jakkanahalli Vishnukumar ◽  
Björn Butting ◽  
Christian Müller ◽  
Eric Sax
2020 ◽  
Vol 23 (6) ◽  
pp. 1172-1191
Author(s):  
Artem Aleksandrovich Elizarov ◽  
Evgenii Viktorovich Razinkov

Recently, reinforcement learning has been an actively developing direction of machine learning. As a consequence, attempts are being made to apply reinforcement learning to computer vision problems, in particular to image classification. Computer vision tasks are currently among the most pressing tasks of artificial intelligence. The article proposes a method for image classification in the form of a deep neural network trained using reinforcement learning. The idea of the developed method comes down to solving a contextual multi-armed bandit problem using various strategies for balancing exploitation and exploration together with reinforcement learning algorithms. Strategies such as ε-greedy, ε-softmax, and ε-decay-softmax, as well as the UCB1 method, and reinforcement learning algorithms such as DQN, REINFORCE, and A2C are considered. The influence of various parameters on the efficiency of the method is analyzed, and options for further development of the method are proposed.
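As an illustrative sketch only (not the authors' implementation), the contextual-bandit view of classification can be made concrete with an ε-greedy agent: each class is an arm, the image features are the context, and the reward is 1 for a correct label and 0 otherwise. The linear per-arm value estimates below stand in for the deep network, and all names and parameters are assumptions.

```python
import numpy as np

# Minimal epsilon-greedy contextual bandit for image classification (illustrative only).
# Each class is an "arm"; the context is an image feature vector; the reward is 1 for a
# correct prediction and 0 otherwise. Linear per-arm value estimates stand in for the
# deep network described in the abstract.
class EpsilonGreedyClassifierBandit:
    def __init__(self, n_classes, n_features, epsilon=0.1, lr=0.01):
        self.epsilon = epsilon
        self.lr = lr
        self.weights = np.zeros((n_classes, n_features))  # one value estimator per arm

    def select_action(self, context):
        # Explore with probability epsilon, otherwise exploit the current estimates.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.weights.shape[0])
        return int(np.argmax(self.weights @ context))

    def update(self, context, action, reward):
        # Move the chosen arm's estimate toward the observed reward (SGD on squared error).
        prediction = self.weights[action] @ context
        self.weights[action] += self.lr * (reward - prediction) * context

# Usage on synthetic "image" features: reward is 1 when the chosen arm matches the label.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))
labels = rng.integers(0, 10, size=1000)
bandit = EpsilonGreedyClassifierBandit(n_classes=10, n_features=64)
for x, y in zip(features, labels):
    a = bandit.select_action(x)
    bandit.update(x, a, reward=1.0 if a == y else 0.0)
```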


Author(s):  
Aravind R Kashyap

This project considers the operational impact of autonomous vehicles by creating a corridor using the latest available network. The behaviour of the vehicles entering the corridor is monitored at the macroscopic level by modifying the data that can be extracted from the vehicles. This data is used to train a machine learning model, a time-series neural network, and serves as a parameter for making the vehicles autonomous. The project resolves vehicle location and develops and demonstrates collision avoidance using artificial intelligence. Autonomous means that the vehicles are able to learn to act accordingly without human intervention.
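The abstract does not specify the network; one plausible reading of a "time-series neural network" over vehicle telemetry is a small recurrent model trained on sliding windows, sketched below. The signals (speed, heading, gap to the lead vehicle) and the next-step-gap target are assumptions, not the project's actual configuration.

```python
import numpy as np
import tensorflow as tf

# Illustrative time-series neural network over windows of vehicle telemetry.
# The three input signals (speed, heading, gap to lead vehicle) and the target
# (gap at the next time step, a crude collision-risk proxy) are assumptions.
WINDOW, FEATURES = 20, 3
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted gap at the next step
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in data: 500 telemetry windows and their next-step gaps.
x = np.random.rand(500, WINDOW, FEATURES).astype("float32")
y = np.random.rand(500, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```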


Author(s):  
Dhairya Shah

Abstract: Vehicle positioning and classification is a vital technology in intelligent transportation and self-driving cars. This paper describes experiments on the classification of vehicle images by artificial vision, using Keras and TensorFlow to construct a deep neural network model, together with Python modules and a machine learning algorithm. Image classification finds its suitability in applications ranging from medical diagnostics to autonomous vehicles. The existing architectures are computationally expensive, complex, and less accurate. The outcomes are used to assess the best camera location for filming vehicular traffic in order to determine highway occupancy. An accurate, simple, and hardware-efficient architecture needs to be developed for image classification.
Keywords: Convolutional Neural Networks, Image Classification, deep neural network, Keras, Tensorflow, Python, machine learning, dataset
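The paper names Keras and TensorFlow but the exact architecture is not given here; a compact convolutional classifier in that spirit might look like the sketch below, where the 64x64 input size and the five vehicle classes are assumptions.

```python
import tensorflow as tf

# Compact CNN for vehicle image classification (illustrative; the 64x64 input size
# and the five vehicle classes are assumptions, not the paper's reported setup).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # one output per vehicle class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```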


2020 ◽  
Author(s):  
Mustafa Umit Oner ◽  
Yi-Chih Cheng ◽  
Hwee Kuan Lee ◽  
Wing-Kin Sung

This article discusses the effect of segregating histopathology image data into three sets: a training set for training the machine learning model, a validation set for model selection, and a test set for testing model performance. We found that one must be cautious when segregating histological image data (slides) into training, validation, and test sets because subtle mishandling of the data can introduce data leakage and give deceptively good results on the test set. We performed this study on the gene mutation prediction performance of the deep neural network in the paper of Coudray et al. [1]. Using the provided code and the same set of data, we discovered that the data segregation method of the paper suffered from a data leakage problem [2].

The paper pools all the slides from all patients and then segregates them exclusively into training, validation, and test sets. In this way, none of the slides is used in more than one set. This seems to be a clean separation of the data. However, the paper did not consider that some slides are strongly correlated. For example, if the tumor of a patient is cut and stained to produce multiple slides, these slides are strongly correlated. If one slide is used for training and another for testing, the deep neural network can essentially memorize the pattern on the slide in the training set and apply this memory to the slide in the test set. Hence, by memorization, the deep neural network can predict very well on the slide in the test set. This mechanism of prediction is not useful in a practical clinical setting, since no two tumors are the same in the real world. In this real setting, we demand that the deep neural network generalize across patients and tumors. Hereafter, we call this way of data segregation slide-level segregation.

There is a better way to perform data segregation that is compatible with deployment of a deep learning model in practical clinical settings. First, the patients are segregated exclusively into training, validation, and test sets. All the slides belonging to the patients in the training set are used solely for training. Similarly, all the slides belonging to the patients in the test set are used for testing only. Segregating the data in this way forces the deep neural network to generalize across patients. We call this way of data segregation patient-level segregation.

In the slide-level segregation analysis, we obtained results similar to those presented in the paper by Coudray et al. [1]: overall performance on the test set was good. However, it was illusory due to data leakage. The model gave very good test results on slides that come from a patient who also has slides in the training set. On the other hand, the test result was quite bad on slides that come from a patient who does not have any slides in the training set. Hereafter, we call a slide in the test set seen-patient data if the corresponding patient also has some slides in the training set; otherwise, the slide in the test set is called unseen-patient data.

Furthermore, we analyzed the performance of the model on the data segregated by the patient-level segregation approach. Note that, in this approach, all patients in the test set mimic the real-world clinical workflow. We observed a significant drop in the performance of the model on the test set of the patient-level segregation approach compared to the performance on the test set of the slide-level segregation approach. Moreover, the performance of the model on the test set of the patient-level segregation approach was very similar to the performance on the unseen-patient data in the test set of the slide-level segregation approach. Hence, we conclude that the patient-level segregation approach is crucial and appropriate for simulating the real-world scenario, where each patient in the test set can be thought of as a patient walking into the clinic tomorrow.
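The difference between slide-level and patient-level segregation can be made concrete with grouped splitting, as in the sketch below. This is not the released code of either paper; the slide names and patient IDs are placeholders.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

# Placeholder metadata: several slides per patient.
slides = np.array([f"slide_{i}" for i in range(12)])
patients = np.array(["P1", "P1", "P2", "P2", "P3", "P3",
                     "P4", "P4", "P5", "P5", "P6", "P6"])

# Slide-level segregation (leaky): slides from the same patient can land in both
# the training and the test set.
leaky_train, leaky_test = train_test_split(slides, test_size=0.33, random_state=0)

# Patient-level segregation: split on patient IDs so that all of a patient's slides
# fall into the same set, forcing the model to generalize across patients.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, test_idx = next(splitter.split(slides, groups=patients))
clean_train, clean_test = slides[train_idx], slides[test_idx]

# No patient appears on both sides of the patient-level split.
assert set(patients[train_idx]).isdisjoint(patients[test_idx])
```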


2020 ◽  
Vol 8 (10) ◽  
pp. 766
Author(s):  
Dohan Oh ◽  
Julia Race ◽  
Selda Oterkus ◽  
Bonguk Koo

Mechanical damage is recognized as a problem that reduces the performance of oil and gas pipelines and has been the subject of continuous research. Artificial neural networks, which have recently drawn attention, are expected to offer another solution to problems relating to pipelines. In this study, a deep neural network, a machine learning method based on the artificial neural network algorithm, is applied. The applicability of machine learning techniques such as deep neural networks to the prediction of burst pressure has been investigated for dented API 5L X-grade pipelines. To this end, supervised learning is employed; the deep neural network model has four fully connected layers, three of which are hidden layers. The burst pressure computed by the deep neural network model has been compared with the results of a finite element analysis-based parametric study and with the burst pressure calculated from experimental results. The comparison shows good agreement. Therefore, it is concluded that deep neural networks can be another solution for predicting the burst pressure of dented API 5L X-grade pipelines.
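Based only on the architecture stated above (four fully connected layers, three of them hidden), a hedged Keras rendering could look like the following; the layer widths, the four input features (for example dent depth, pipe diameter, wall thickness, yield strength), and the training details are assumptions.

```python
import numpy as np
import tensorflow as tf

# Fully connected regression network with three hidden layers and one output layer,
# matching the four-layer description in the abstract. Layer widths and the input
# features (e.g. dent depth, pipe diameter, wall thickness, yield strength) are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted burst pressure
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Synthetic stand-in for the FEA-based parametric dataset.
x = np.random.rand(200, 4).astype("float32")
y = np.random.rand(200, 1).astype("float32")
model.fit(x, y, epochs=5, verbose=0)
```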


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, some work has been developed in traditional machine learning and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy of fairness definitions that machine learning researchers have defined to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.


2019 ◽  
Vol 10 (36) ◽  
pp. 8374-8383 ◽  
Author(s):  
Mohammad Atif Faiz Afzal ◽  
Aditya Sonpal ◽  
Mojtaba Haghighatlari ◽  
Andrew J. Schultz ◽  
Johannes Hachmann

Computational pipeline for the accelerated discovery of organic materials with high refractive index via high-throughput screening and machine learning.


Author(s):  
J.-M. Deltorn ◽  
Franck Macrez

A new generation of machine learning (ML) and artificial intelligence (AI) creative tools is now at the disposal of musicians, professionals and amateurs alike. These new technical intermediaries allow the production of unprecedented forms of compositions, from generating new works by mimicking a style or mixing a curated ensemble of musical works, to letting an algorithm complete one's own creation in unexpected directions, or letting an artist interact with the parameters of a neural network to explore fresh musical avenues. Unsurprisingly, this new spectrum of algorithmic compositions questions both the nature and the degree of involvement of the creator in the musical work. As a consequence, the issue of authorship and, in particular, the assessment of the specific contribution of a (human) creator through the algorithmic pipeline may require special scrutiny when AI and ML tools are used to produce musical works.


2020 ◽  
Author(s):  
Muhammad Afzal ◽  
Fakhare Alam ◽  
Khalid Mahmood Malik ◽  
Ghaus M Malik

BACKGROUND
Automatic text summarization (ATS) enables users to retrieve meaningful evidence from big data of biomedical repositories to make complex clinical decisions. Deep neural and recurrent networks outperform traditional machine-learning techniques in areas of natural language processing and computer vision; however, they are yet to be explored in the ATS domain, particularly for medical text summarization.

OBJECTIVE
Traditional approaches in ATS for biomedical text suffer from fundamental issues such as an inability to capture clinical context, quality of evidence, and purpose-driven selection of passages for the summary. We aimed to circumvent these limitations through achieving precise, succinct, and coherent information extraction from credible published biomedical resources, and to construct a simplified summary containing the most informative content that can offer a review particular to clinical needs.

METHODS
In our proposed approach, we introduce a novel framework, termed Biomed-Summarizer, that provides quality-aware Patient/Problem, Intervention, Comparison, and Outcome (PICO)-based intelligent and context-enabled summarization of biomedical text. Biomed-Summarizer integrates the prognosis quality recognition model with a clinical context–aware model to locate text sequences in the body of a biomedical article for use in the final summary. First, we developed a deep neural network binary classifier for quality recognition to acquire scientifically sound studies and filter out others. Second, we developed a bidirectional long short-term memory recurrent neural network as a clinical context–aware classifier, which was trained on semantically enriched features generated using a word-embedding tokenizer for identification of meaningful sentences representing PICO text sequences. Third, we calculated the similarity between query and PICO text sequences using Jaccard similarity with semantic enrichments, where the semantic enrichments are obtained using medical ontologies. Last, we generated a representative summary from the high-scoring PICO sequences aggregated by study type, publication credibility, and freshness score.

RESULTS
Evaluation of the prognosis quality recognition model using a large dataset of biomedical literature related to intracranial aneurysm showed an accuracy of 95.41% (2562/2686) in terms of recognizing quality articles. The clinical context–aware multiclass classifier outperformed the traditional machine-learning algorithms, including support vector machine, gradient boosted tree, linear regression, K-nearest neighbor, and naïve Bayes, by achieving 93% (16127/17341) accuracy for classifying five categories: aim, population, intervention, results, and outcome. The semantic similarity algorithm achieved a significant Pearson correlation coefficient of 0.61 (0-1 scale) on a well-known BIOSSES dataset (with 100 pair sentences) after semantic enrichment, representing an improvement of 8.9% over baseline Jaccard similarity. Finally, we found a highly positive correlation among the evaluations performed by three domain experts concerning different metrics, suggesting that the automated summarization is satisfactory.

CONCLUSIONS
By employing the proposed method Biomed-Summarizer, high accuracy in ATS was achieved, enabling seamless curation of research evidence from the biomedical literature to use for clinical decision-making.
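The Jaccard-with-enrichment step can be illustrated with a toy sketch; the synonym map below merely stands in for the medical-ontology expansion used by Biomed-Summarizer and is purely hypothetical.

```python
# Illustrative Jaccard similarity between a clinical query and a PICO sentence,
# with a toy "semantic enrichment" step. The synonym map is a hypothetical stand-in
# for the medical-ontology expansion described in the paper.
SYNONYMS = {
    "aneurysm": {"aneurysm", "dilation"},
    "treatment": {"treatment", "therapy", "intervention"},
}

def enrich(tokens):
    # Expand each token with its (toy) ontology synonyms.
    enriched = set()
    for t in tokens:
        enriched |= SYNONYMS.get(t, {t})
    return enriched

def jaccard(a, b):
    a, b = enrich(a.lower().split()), enrich(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

print(jaccard("intracranial aneurysm treatment",
              "therapy outcomes for intracranial aneurysm patients"))
```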


2018 ◽  
Author(s):  
Jingxiang Shen ◽  
Mariela D. Petkova ◽  
Yuhai Tu ◽  
Feng Liu ◽  
Chao Tang

Abstract
Complex biological functions are carried out by the interaction of genes and proteins. Uncovering the gene regulation network behind a function is one of the central themes in biology. Typically, it involves extensive experiments of genetics, biochemistry and molecular biology. In this paper, we show that much of the inference task can be accomplished by a deep neural network (DNN), a form of machine learning or artificial intelligence. Specifically, the DNN learns from the dynamics of the gene expression. The learnt DNN behaves like an accurate simulator of the system, on which one can perform in-silico experiments to reveal the underlying gene network. We demonstrate the method with two examples: biochemical adaptation and the gap-gene patterning in fruit fly embryogenesis. In the first example, the DNN can successfully find the two basic network motifs for adaptation: the negative feedback and the incoherent feed-forward. In the second and much more complex example, the DNN can accurately predict behaviors of essentially all the mutants. Furthermore, the regulation network it uncovers is strikingly similar to the one inferred from experiments. In doing so, we develop methods for deciphering the gene regulation network hidden in the DNN "black box". Our interpretable DNN approach should have broad applications in genotype-phenotype mapping.

Significance
Complex biological functions are carried out by gene regulation networks. The mapping between gene network and function is a central theme in biology. The task usually involves extensive experiments with perturbations to the system (e.g. gene deletion). Here, we demonstrate that machine learning, or deep neural network (DNN), can help reveal the underlying gene regulation for a given function or phenotype with minimal perturbation data. Specifically, after training with wild-type gene expression dynamics data and a few mutant snapshots, the DNN learns to behave like an accurate simulator for the genetic system, which can be used to predict other mutants' behaviors. Furthermore, our DNN approach is biochemically interpretable, which helps uncover possible gene regulatory mechanisms underlying the observed phenotypic behaviors.
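A schematic of the "DNN as simulator" idea (not the authors' code): a network is trained to map the expression state at one time step to the next, and the learnt map is then rolled forward in silico, optionally with a gene clamped to zero to mimic a deletion. The four-gene system and the random training data below are placeholders.

```python
import numpy as np
import tensorflow as tf

# Schematic surrogate simulator: learn the one-step map x_t -> x_{t+dt} of gene
# expression dynamics, then roll it forward to predict (hypothetical) mutants.
# The 4-gene system and random training data are placeholders, not the gap-gene model.
N_GENES = 4
step_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_GENES,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(N_GENES),
])
step_model.compile(optimizer="adam", loss="mse")

x_t = np.random.rand(1000, N_GENES).astype("float32")
x_next = np.random.rand(1000, N_GENES).astype("float32")
step_model.fit(x_t, x_next, epochs=2, verbose=0)

def simulate(x0, steps, knockout=None):
    # Roll the learnt map forward; optionally clamp one gene to zero (in-silico deletion).
    x = x0.copy()
    for _ in range(steps):
        if knockout is not None:
            x[:, knockout] = 0.0
        x = step_model.predict(x, verbose=0)
    return x

trajectory = simulate(np.random.rand(1, N_GENES).astype("float32"), steps=5, knockout=2)
```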

