Visual Question Answering using Convolutional Neural Networks

Author(s):  
K. P. Moholkar et al.

The ability of a computer system to understand its surroundings and to process information the way a human being does has long been a major focus in the field of Computer Science. One way to achieve this artificial intelligence is Visual Question Answering. Visual Question Answering (VQA) is a trained system that can answer natural-language questions associated with a given image. VQA is a generalized system that can be used in any image-based scenario given adequate training on the relevant data. This is achieved with the help of Neural Networks, particularly the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN). In this study, we compare different approaches to VQA, of which we explore a CNN-based model. With continued progress in Computer Vision and Question Answering systems, Visual Question Answering is becoming an essential system that can handle multiple scenarios with their respective data.
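The CNN+RNN combination described above can be sketched in miniature: image features from a CNN and question features from an RNN are fused and scored over candidate answers. This is an illustrative toy, not the authors' model; the feature vectors, answer set, and pointwise-product fusion are assumptions standing in for learned components.

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def vqa_fuse(img_feat, q_feat, answer_weights):
    """Element-wise fusion of image and question features, followed by
    a linear scoring layer over candidate answers."""
    fused = [i * q for i, q in zip(img_feat, q_feat)]  # pointwise-product fusion
    scores = [sum(w * f for w, f in zip(row, fused)) for row in answer_weights]
    return softmax(scores)

# Toy example: 4-dim features, 3 candidate answers ("yes", "no", "red").
img_feat = [0.9, 0.1, 0.4, 0.7]   # stand-in for CNN output
q_feat   = [1.0, 0.2, 0.0, 0.5]   # stand-in for RNN question encoding
W = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 1.0]]
probs = vqa_fuse(img_feat, q_feat, W)
```

In a real VQA system the two feature extractors and the scoring weights are all learned jointly; only the fusion-and-classify shape is shown here.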

Author(s):  
Veeraraghavan Jagannathan

Question Answering (QA) has become one of the most significant information retrieval applications. Most question answering systems focus on improving the user experience in finding relevant results. Due to the continuous increase of web content, retrieving relevant results is a challenging issue for a Question Answering System (QAS). Thus, an effective Question Classification (QC) and retrieval approach, named Bayesian probability and Tanimoto-based Recurrent Neural Network (RNN), is proposed in this research to differentiate the types of questions more efficiently. This research presents an analysis of different types of questions with respect to their grammatical structures. Various patterns are identified in the questions, and the RNN classifier is used to classify them. The results obtained by the proposed Bayesian probability and Tanimoto-based RNN show that syntactic categories related to domain-specific types of proper nouns, numerals, and common nouns enable the RNN classifier to achieve better results for different types of questions. The proposed approach obtained better performance in terms of precision, recall, and F-measure, with values of 90.14, 86.301, and 90.936 on dataset-2.
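The Tanimoto coefficient named above measures overlap between two term sets. A minimal sketch follows; the example questions are invented, and the paper's combination of this measure with Bayesian probability and the RNN classifier is not shown.

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) coefficient between two sets of question terms:
    |A ∩ B| / |A ∪ B|, ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

q1 = "what year did the war end".split()
q2 = "which year did the second war end".split()
sim = tanimoto(q1, q2)  # 5 shared terms out of 8 distinct terms
```

In a QC pipeline this similarity would typically score a new question against stored question patterns before (or alongside) the classifier.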


Author(s):  
Pratheek I
Joy Paulose

Generating sequences of characters using a Recurrent Neural Network (RNN) is a tried and tested method for creating unique and context-aware words, and is fundamental in Natural Language Processing tasks. These types of Neural Networks can also be used as a question-answering system. The main drawback of most of these systems is that they work from a factoid database of information, and when queried about new and current information, the responses are usually poor. In this paper, the author proposes a novel approach to finding answer keywords from a given body of news text or headline, based on the query that was asked, where the query concerns current affairs or recent news, using the Gated Recurrent Unit (GRU) variant of RNNs. This ensures that the answers provided are relevant to the content of the query that was put forth.
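The GRU variant mentioned above differs from a plain RNN by its update and reset gates, which control how much past state is kept. Below is a scalar-state sketch of one GRU step with made-up weights; real implementations operate on vectors with weight matrices, but the gating logic is the same.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h_prev, x, w):
    """One GRU step for a scalar input/state.
    z decides how much of the old state to overwrite;
    r decides how much of it feeds the candidate state."""
    z = sigmoid(w["wz"] * x + w["uz"] * h_prev)               # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev)               # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_cand

# Arbitrary illustrative weights; token values stand in for embedded words.
weights = {"wz": 0.5, "uz": 0.5, "wr": 0.5, "ur": 0.5, "wh": 1.0, "uh": 1.0}
h = 0.0
for token_value in [1.0, -0.5, 0.25]:
    h = gru_step(h, token_value, weights)
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden state stays in (-1, 1), which is part of what makes GRUs stable over long sequences.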


2020
Vol 3 (1)
pp. 138-146
Author(s):  
Subash Pandey
Rabin Kumar Dhamala
Bikram Karki
Saroj Dahal
Rama Bastola

Automatically generating a natural language description of an image is a major challenge in the field of artificial intelligence. Generating a description of an image brings together two fields: Natural Language Processing and Computer Vision. There are two types of approaches, top-down and bottom-up. In this paper, we take the top-down approach, which starts from the image and converts it into words. The image is passed to a Convolutional Neural Network (CNN) encoder, and its output is fed to a Recurrent Neural Network (RNN) decoder that generates meaningful captions. We generated image descriptions by passing real-time images from the camera of a smartphone as well as test images from the dataset. To evaluate the model's performance, we used the BLEU (Bilingual Evaluation Understudy) score and matched predicted words against the original caption.
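The BLEU score used for evaluation compares predicted words against the reference caption. A sketch of its core ingredient, modified unigram precision, is shown below; full BLEU also combines higher-order n-gram precisions with a brevity penalty, and the captions here are invented.

```python
from collections import Counter

def unigram_bleu(candidate, reference):
    """Modified unigram precision: clip each candidate word's count by its
    count in the reference, then divide by the candidate length."""
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand)

# 4 of the 6 candidate words ("a", "dog", "on", "grass") appear in the reference.
score = unigram_bleu("a dog is running on grass",
                     "a dog runs on the green grass")
```

The clipping step prevents a degenerate caption like "the the the the" from scoring highly just by repeating a common reference word.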


2021
Author(s):  
Callum Newman
Jon Petzing
Yee Mey Goh
Laura Justham

Artificial intelligence in computer vision has focused on improving test performance using techniques and architectures related to deep neural networks. However, improvements can also be achieved by carefully selecting the training dataset images. Environmental factors, such as light intensity, affect an image's appearance, and by choosing optimal factor levels the neural network's performance can be improved. However, little research is available into processes that help identify optimal levels. This research presents a case study using a process for developing an optimised dataset for training an object detection neural network. Images are gathered under controlled conditions using multiple factors to construct various training datasets. Each dataset is used to train the same neural network, and the test performances are compared to identify the optimal factors. The opportunity to use synthetic images is introduced, which has many advantages, including creating images when real-world images are unavailable and more easily controlling factors.


Author(s):  
Ali Sami Sosa
Saja Majeed Mohammed
Haider Hadi Abbas
Israa Al Barazanchi

Recent years have witnessed the success of artificial intelligence-based automated systems that use deep learning, especially recurrent neural network-based models, on many natural language processing problems, including machine translation and question answering. Besides, recurrent neural networks and their variations have been extensively studied on several graph problems and have shown preliminary success. Despite these successes, recurrent neural network-based models continue to suffer from several major drawbacks. First, they can only consume sequential data; thus, linearization is required to serialize input graphs, resulting in the loss of important structural information. In particular, graph nodes that are originally located close to each other can be very far apart after linearization, which introduces great challenges for recurrent neural networks in modeling their relation. Second, the serialization results are usually very long, so it takes a long time for recurrent neural networks to encode them. In the methodology of this study, we made the resulting graphs more densely connected so that more useful facts could be inferred, and the problem of graphical natural language processing could be easily decoded with a graph recurrent neural network. As a result, the performances with single-typed edges were significantly better than the Local baseline, whereas the combination of all edge types achieved much better accuracy than the Local baseline using a recurrent neural network. In this paper, we propose a novel graph neural network, named the graph recurrent network.
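The densification idea (adding edges so related nodes are reachable in fewer hops) can be illustrated with one simplified propagation step. The averaging update below stands in for the paper's gated graph recurrent update; the graph and node states are toy values.

```python
def message_passing_step(h, edges):
    """One propagation step: each node's new state is the average of its
    current state and its neighbours' states (a simplified graph-RNN update)."""
    new_h = {}
    for node, state in h.items():
        neigh = [h[v] for u, v in edges if u == node] + \
                [h[u] for u, v in edges if v == node]
        new_h[node] = (state + sum(neigh)) / (1 + len(neigh))
    return new_h

# Toy 3-node chain 0-1-2. Adding the extra edge (0, 2) lets node 0
# receive node 2's state in one hop instead of two.
h0 = {0: 1.0, 1: 0.0, 2: -1.0}
sparse = [(0, 1), (1, 2)]
dense = sparse + [(0, 2)]
h_sparse = message_passing_step(h0, sparse)
h_dense = message_passing_step(h0, dense)
```

After one step on the sparse chain, node 0 has seen only node 1; on the densified graph it has already mixed in node 2's state, which is exactly the benefit the abstract attributes to denser connectivity.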


Author(s):  
N.A. Yanishevskaya
I.P. Bolodurina
In the Russian Federation, the agro-industrial complex is one of the leading sectors of the economy, accounting for 4.5% of domestic product. Russia owns 10% of all arable land in the world. According to the data on sown areas by crop in 2020, most of the agricultural area of Russia is occupied by wheat. The Russian Federation ranks third among the leading countries in the production of this type of grain crop, and holds leading positions in its export. Brown (leaf) and linear (stem) rust are the most harmful diseases of grain crops. They cause sparseness of wheat crops and lead to a sharp decrease in yield. Therefore, one of the main tasks of farmers is to protect the crop from diseases. The application of such areas of artificial intelligence as computer vision, machine learning, and deep learning can cope with this task. These artificial intelligence technologies make it possible to successfully solve applied problems of the agro-industrial complex using automated analysis of photographic materials. Aim. To consider the application of computer vision methods to the problem of classifying lesions of cultivated plants, using wheat as an example. Materials and methods. The CGIAR Computer Vision for Crop Disease dataset for the crop disease recognition task is taken from the open source Kaggle. An approach to the recognition of lesions of cultivated plants is proposed using the well-known neural network models ResNet50, DenseNet169, VGG16, and EfficientNet-B0. The neural network models receive images of wheat as input; their output is the class of plant damage. To overcome the effect of overfitting, various regularization techniques are investigated. Results. The classification quality, estimated using the F1-score metric (the harmonic mean of Precision and Recall), is presented. Conclusion. The research found that the DenseNet model showed the best recognition accuracy, using a combination of transfer learning, Dropout, and L2 regularization to overcome the effect of overfitting. This approach achieved a recognition accuracy of 91%.
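The two regularization techniques named in the conclusion, Dropout and an L2 weight penalty, can be sketched directly. This is a generic illustration rather than the study's training code; the dropout rate, weights, and penalty strength are arbitrary.

```python
import random

def dropout(activations, rate, rng):
    """Inverted dropout: zero each activation with probability `rate` and
    rescale survivors by 1/(1-rate) so the expected value is unchanged."""
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

def l2_penalty(weights, lam):
    """L2 regularization term added to the loss: lam * sum of squared weights."""
    return lam * sum(w * w for w in weights)

rng = random.Random(0)                       # fixed seed for reproducibility
acts = [1.0] * 1000
dropped = dropout(acts, rate=0.5, rng=rng)   # roughly half become 0.0, rest 2.0
penalty = l2_penalty([0.5, -0.5, 1.0], lam=0.01)
```

Dropout is applied only during training (at inference the layer is an identity), while the L2 term is added to the loss on every training step to keep weights small.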


2020
Vol 96 (3s)
pp. 585-588
Author(s):  
С.Е. Фролова
Е.С. Янакова

Methods have been proposed for building prototyping platforms for high-performance systems-on-chip for artificial intelligence tasks. The requirements for platforms of this class and the principles for changing the design of the SoC for implementation in the prototype have been described, as well as methods of debugging projects on the prototyping platform. The results of the work of computer vision algorithms using neural network technologies on the FPGA prototype of the ELcore semantic cores have been presented.


2021
Vol 4 (1)
Author(s):  
Andre Esteva
Katherine Chou
Serena Yeung
Nikhil Naik
Ali Madani
...

A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields—including medicine—to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques—powered by deep learning—for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit—including cardiology, pathology, dermatology, ophthalmology—and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles required for real-world clinical deployment of these technologies.


Author(s):  
E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and recognition of emotions in speech (RER) is the most demanded part of them. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer studies are carried out on the RAVDESS database, which contains emotional human speech. RAVDESS is a dataset containing 7356 files. Entries contain the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions divided into male and female) for a total of 1440 samples (speech only). To train machine learning algorithms and deep neural networks to recognize emotions, existing audio recordings must be pre-processed to extract the main characteristic features of particular emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the audio recordings. Computer studies of various neural network models for emotion recognition are carried out on the data described above, and machine learning algorithms are used for comparative analysis. Thus, the following models were trained during the experiments: logistic regression (LR), a support vector machine classifier (SVM), decision tree (DT), random forest (RF), gradient boosting over trees (XGBoost), the convolutional neural network CNN, the recurrent neural network RNN (ResNet18), and an ensemble of convolutional and recurrent networks, Stacked CNN-RNN. The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms used. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
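One simple way to ensemble the CNN and BLSTM outputs is to average their per-class probabilities. The paper's Stacked CNN-RNN may well combine the models differently (e.g. with a learned meta-classifier); the probabilities below are invented for illustration.

```python
def average_ensemble(prob_lists):
    """Average per-class probabilities from several models and return
    the index of the most likely class together with the averaged vector."""
    n = len(prob_lists)
    avg = [sum(p[i] for p in prob_lists) / n
           for i in range(len(prob_lists[0]))]
    return avg.index(max(avg)), avg

# Toy example over 3 of the 8 emotion classes: 0=neutral, 1=calm, 2=happiness.
# The models disagree; the ensemble sides with the more confident one.
cnn_probs   = [0.2, 0.1, 0.7]
blstm_probs = [0.4, 0.35, 0.25]
label, avg = average_ensemble([cnn_probs, blstm_probs])
```

Averaging is the weakest form of stacking but often already smooths out individual-model errors; a true stacked ensemble trains a second-level model on the concatenated probability vectors.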

