COMPUTER VISION FOR HUMAN-MACHINE INTERACTION: A REVIEW

Author(s):  
Dr. Suma V.

This paper reviews computer vision techniques that support interaction between humans and machines. Computer vision, a subfield of artificial intelligence and machine learning, trains computers to see, interpret, and respond to the visual world much as human vision does. Thanks to progress, developments, and recent innovations in artificial intelligence, deep learning, and neural networks, computer vision is now applied in broad areas such as health care, safety, security, and surveillance. The paper presents the enhanced capabilities that computer vision brings to various human-machine interaction applications involving artificial intelligence, deep learning, and neural networks.

2021
Vol 4 (1)
Author(s):  
Andre Esteva
Katherine Chou
Serena Yeung
Nikhil Naik
Ali Madani
et al.

Abstract: A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields—including medicine—to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques—powered by deep learning—for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit—including cardiology, pathology, dermatology, and ophthalmology—and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles required for real-world clinical deployment of these technologies.


Author(s):  
Ekaterina Artemova

Abstract: Deep learning is a term used to describe a family of artificial intelligence (AI) technologies. AI deals with how computers can be used to solve complex problems in the same way that humans do. Computer vision (CV) and natural language processing (NLP) stand out as the largest AI areas. To imitate human vision and the human ability to express meaning and feelings through language, deep learning exploits artificial neural networks trained on real-life evidence. While most vision-related tasks are solved using common methods nearly irrespective of the target domain, NLP methods depend strongly on the properties of a given language. Linguistic diversity complicates deep learning for NLP. This chapter focuses on deep learning applications to processing the Russian language.


2021
Vol 11 (11)
pp. 1213
Author(s):  
Morteza Esmaeili
Riyas Vettukattil
Hasan Banitalebi
Nina R. Krogh
Jonn Terje Geitung

Primary malignancies in adult brains are globally fatal. Computer vision, and especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as black boxes, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process. This study evaluates the performance of selected deep-learning algorithms at localizing tumor lesions and distinguishing lesions from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the known AI algorithms examined in this study classify some tumor brains based on non-relevant features. The results suggest that explainable AI approaches can build an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human-machine interaction and assist in the selection of optimal training methods.
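The abstract above describes visualizing which image regions actually drive a trained model's prediction. One simple explainability technique in that spirit is occlusion sensitivity: slide a neutral patch over the image and record how much the classification score drops at each position. The sketch below is purely illustrative and uses a toy scoring function, not the study's model; all names are assumptions.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Occlusion sensitivity: slide a neutral patch over the image and
    record how much the classifier's score drops at each position.
    Large drops mark regions the model relies on."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy stand-in for a trained classifier: it scores the mean intensity
# of a central window, mimicking a model that attends to a "lesion".
def toy_score(img):
    return img[8:16, 8:16].mean()

img = np.zeros((24, 24))
img[10:14, 10:14] = 1.0          # bright "lesion" in the centre
heat = occlusion_map(img, toy_score, patch=4)
```

The heatmap peaks only where occlusion hides the "lesion", which is exactly the kind of check that can expose a model classifying on non-relevant features.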


Author(s):  
Tanya Tiwari
Tanuj Tiwari
Sanjay Tiwari

There is a lot of confusion these days about Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). Artificial Intelligence has made it possible for computer systems to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Deep learning is a subset of machine learning, and machine learning is a subset of AI, an umbrella term for any computer program that does something smart. In other words, all machine learning is AI, but not all AI is machine learning. Machine learning represents a key evolution in the fields of computer science, data analysis, software engineering, and artificial intelligence. It is a vibrant field of research, with a range of exciting areas for further development across different methods and applications, including algorithmic interpretability, robustness, privacy, fairness, inference of causality, human-machine interaction, and security. The goal of ML is never to make "perfect" guesses, because ML deals in domains where there is no such thing; the goal is to make guesses that are good enough to be useful. Deep learning is a particular kind of machine learning that achieves great power and flexibility by learning to represent the world as a nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones. This paper gives an overview of artificial intelligence, machine learning, and deep learning techniques and compares them.


2021
pp. 18-50
Author(s):  
Ahmed A. Elngar
et al.

Computer vision is one of the fields of computer science and one of the most powerful and persuasive types of artificial intelligence. It resembles the human vision system in that it enables computers to recognize and process objects in pictures and videos the way humans do. Computer vision technology has rapidly evolved in many fields and has contributed to solving many problems. It has contributed to self-driving cars, enabling cars to understand their surroundings: cameras record video from different angles around the car, and a computer vision system extracts images from the video and processes them in real time to find the edges of the road and to detect other cars, traffic lights, pedestrians, and objects. Computer vision has also contributed to facial recognition, a technology that enables computers to match images of people's faces to their identities; its algorithms detect facial features in images and compare them with databases. Computer vision also plays an important role in healthcare, where algorithms can help automate tasks such as detecting breast cancer, finding symptoms in X-rays, spotting cancerous moles in skin images, and reading MRI scans. It has likewise contributed to many fields such as image classification, object detection, motion recognition, subject tracking, and medicine. The rapid development of artificial intelligence is making machine learning more important in this field of research: algorithms work through every bit of data and predict the outcome, which has become an important key to unlocking the door to AI. Looking at the concept of deep learning, we find that deep learning is a subset of machine learning whose algorithms, inspired by the structure and function of the human brain and called artificial neural networks, learn from large amounts of data. A deep learning algorithm performs a task repeatedly, each time tweaking it a little to improve the outcome.
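The "perform a task repeatedly, each time tweaking it a little" loop described above is, at its core, gradient descent. A minimal sketch, using toy data and illustrative names (not code from the paper):

```python
import numpy as np

# Gradient descent fitting a line y = w*x to toy data. Each iteration
# nudges the weight slightly in the direction that reduces the error.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                      # ground truth: w = 2

w = 0.0                          # initial guess
lr = 0.01                        # learning rate: size of each tweak
for _ in range(500):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)   # d(mean squared error)/dw
    w -= lr * grad                        # tweak toward lower error
```

After 500 small tweaks, w converges to the true value of 2; deep learning runs the same loop over millions of weights at once.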
So, the development of computer vision owes much to deep learning. Now we will take a tour of convolutional neural networks (abbreviated as CNN or ConvNet), among the most powerful supervised deep learning models. The name "convolutional" is taken from a mathematical linear operation between matrices called convolution. The CNN structure can be applied to a variety of real-world problems, including computer vision, image recognition, natural language processing (NLP), anomaly detection, video analysis, drug discovery, recommender systems, health risk assessment, and time-series forecasting. CNNs are similar to ordinary neural networks; the main difference is that CNNs are used chiefly for pattern recognition within images. This allows the features of an image to be encoded into the network structure, making the network more suitable for image-focused tasks while reducing the number of parameters required to set up the model. One advantage of CNNs is their excellent performance on machine learning problems, so we will use a CNN as a classifier for image classification. The objective of this paper is to discuss image classification in detail in the following sections.
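The "linear operation between matrices" that gives these networks their name can be sketched in a few lines: slide a small kernel over an image and take a dot product at each position (strictly, cross-correlation, which is how CNN libraries implement the convolution layer). The function and kernel below are illustrative, not from the paper:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in CNN layers):
    slide the kernel over the image and take a dot product at each step."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel: responds where intensity changes left to right.
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])
img = np.zeros((4, 4))
img[:, 2:] = 1.0                 # right half bright
feature_map = conv2d(img, edge_kernel)
```

The output is nonzero only along the column where the dark and bright halves meet; a CNN learns many such kernels from data instead of hand-designing them, which is the parameter saving mentioned above.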


2020
Vol 7 (1)
pp. 2-3
Author(s):  
Shadi Saleh

Deep learning and machine learning innovations are at the core of the ongoing revolution in Artificial Intelligence for the interpretation and analysis of multimedia data. The convergence of large-scale datasets and more affordable Graphics Processing Unit (GPU) hardware has enabled the development of neural networks for data analysis problems that were previously handled by traditional handcrafted features. Several deep learning architectures, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM)/Gated Recurrent Unit (GRU) networks, Deep Belief Networks (DBNs), and Deep Stacking Networks (DSNs), have been used with new open-source software and library options to shape an entirely new scenario in computer vision processing.


Author(s):  
Saad Sadiq
Mei-Ling Shyu
Daniel J. Feaster

Deep Neural Networks (DNNs) are best known for being the state of the art in artificial intelligence (AI) applications, including natural language processing (NLP), speech processing, and computer vision. In spite of all the recent achievements of deep learning, it has yet to achieve the semantic learning required to reason about data. This lack of reasoning is partly attributed to rote memorization of patterns and curves from millions of training samples, ignoring the spatiotemporal relationships among them. The proposed framework puts forward a novel approach based on variational autoencoders (VAEs), using the potential outcomes model and developing counterfactual autoencoders. The framework transforms any sort of multimedia input distribution into a meaningful latent space while giving more control over how the latent space is created. This allows modeling data that is better suited to answering inference-based queries, which is very valuable in reasoning-based AI applications.


2020
Vol 17 (168)
pp. 20200446
Author(s):  
Blanca Jiménez-García ◽  
José Aznarte ◽  
Natalia Abellán ◽  
Enrique Baquedano ◽  
Manuel Domínguez-Rodrigo

Taphonomists have long struggled with identifying carnivore agency in bone accumulation and modification. Now that several taphonomic techniques allow identifying carnivore modification of bones, a next step involves determining carnivore type. This is of utmost importance to determine which carnivores were preying on and competing with hominins and what types of interaction existed among them during prehistory. Computer vision techniques using deep architectures of convolutional neural networks (CNN) have enabled significantly higher resolution in the identification of bone surface modifications (BSM) than previous methods. Here, we apply these techniques to test the hypothesis that different carnivores create specific BSM that can enable their identification. To make differentiation more challenging, we selected two types of carnivores (lions and jaguars) that belong to the same mammal family and have similar dental morphology. We hypothesize that if two similar carnivores can be identified by the BSM they imprint on bones, then two more distinctive carnivores (e.g. hyenids and felids) should be more easily distinguished. The CNN method used here shows that tooth scores from both types of felids can be successfully classified with an accuracy greater than 82%. The first hypothesis was successfully tested. The next step will be to differentiate diverse carnivore types involving a wider range of carnivore-made BSM. The present study demonstrates that resolution increases when combining two different disciplines (taphonomy and artificial intelligence computing) in order to test new hypotheses that could not be addressed with traditional taphonomic methods.


Healthcare
2021
Vol 9 (7)
pp. 834
Author(s):  
Magbool Alelyani
Sultan Alamri
Mohammed S. Alqahtani
Alamin Musa
Hajar Almater
et al.

Artificial intelligence (AI) is a broad, umbrella term that encompasses the theory and development of computer systems able to perform tasks normally requiring human intelligence. The aim of this study is to assess the radiology community’s attitude in Saudi Arabia toward the applications of AI. Methods: Data for this study were collected using electronic questionnaires in 2019 and 2020. The study included a total of 714 participants. Data analysis was performed using SPSS Statistics (version 25). Results: The majority of the participants (61.2%) had read or heard about the role of AI in radiology. We also found that radiologists had statistically different responses and tended to read more about AI compared to all other specialists. In addition, 82% of the participants thought that AI must be included in the curriculum of medical and allied health colleges, and 86% of the participants agreed that AI would be essential in the future. Even though human–machine interaction was considered to be one of the most important skills in the future, 89% of the participants thought that it would never replace radiologists. Conclusion: Because AI plays a vital role in radiology, it is important to ensure that radiologists and radiographers have at least a minimum understanding of the technology. Our finding shows an acceptable level of knowledge regarding AI technology and that AI applications should be included in the curriculum of the medical and health sciences colleges.

