Deep Learning for Surface Material Classification Using Haptic and Visual Information

2016, Vol 18 (12), pp. 2407-2416
Author(s):  
Haitian Zheng
Lu Fang
Mengqi Ji
Matti Strese
Yigitcan Ozer
...  
2021, pp. 49-58
Author(s):  
Naveeja Sajeevan
M. Arathi Nair
R. Aravind Sekhar
K. G. Sreeni

Author(s):  
Silvia Uribe
Alberto Belmonte
Francisco Moreno
Álvaro Llorente
Juan Pedro López
...  

Abstract: Universal access on equal terms to audiovisual content is a key point for the full inclusion of people with disabilities in the activities of daily life. Although it has been identified as a real challenge for the current Information Society, it has not yet been met efficiently, because current access solutions are mainly based on the traditional television standard and on other non-automated, high-cost approaches. The arrival of new technologies within the hybrid television environment, together with the application of different artificial intelligence techniques to the content, will enable the deployment of innovative solutions for enhancing the user experience for all. In this paper, a set of tools for image enhancement based on the combination of deep learning and computer vision algorithms is presented. These tools provide automatic descriptive information about the media content based on face detection for magnification and on character identification. This information is finally fused to provide a customizable description of the visual information, with the aim of improving the accessibility level of the content and allowing an efficient, reduced-cost solution for all.
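As a rough sketch of the face-detection-for-magnification step described above, the snippet below detects faces in a frame and returns magnified crops. It uses an OpenCV Haar cascade as a stand-in detector; the cascade file, zoom factor, and function name are illustrative assumptions, not the authors' actual deep learning pipeline.

```python
# Hedged sketch: face detection + magnification for accessibility.
# The Haar cascade is a stand-in for the paper's deep learning detector.
import cv2

def magnify_faces(frame, zoom=2.0):
    """Detect faces in a BGR frame and return magnified crops."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    crops = []
    for (x, y, w, h) in faces:
        crop = frame[y:y + h, x:x + w]
        crops.append(cv2.resize(crop, None, fx=zoom, fy=zoom,
                                interpolation=cv2.INTER_CUBIC))
    return crops
```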


Author(s):  
Yoichiro Maeda
Kotaro Sano
Eric W. Cooper
Katsuari Kamei
...  

In recent years, much research on the unmanned control of moving vehicles has been conducted, and various robots and motor vehicles that move automatically are in use. However, the more complicated the environment, the more difficult it is for an autonomous vehicle to move automatically. Even in such a challenging environment, however, an expert with the necessary operation skill can sometimes perform the appropriate control of the moving vehicle. In this research, a method for learning a human's operation skill using a convolutional neural network (CNN) with visual information as input is proposed, so that more complicated environmental information can be learned. A CNN is a kind of deep-learning network that exhibits high performance in the field of image recognition. In the experiment, the operation knowledge was also visualized using a fuzzy neural network, with the obtained input-output maps used to create fuzzy rules. To verify the effectiveness of this method, an experiment on operation skill acquisition was conducted with several subjects using a drone control simulator.
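As an illustration of the kind of model the abstract describes, the sketch below is a minimal PyTorch CNN that maps camera frames to operation commands. The layer sizes and the four-command output (e.g., pitch, roll, yaw, throttle for a drone) are assumptions, not the paper's reported architecture.

```python
# Minimal sketch of a CNN that learns operation skill from visual input.
import torch.nn as nn

class SkillCNN(nn.Module):
    def __init__(self, n_commands=4):    # assumption: pitch/roll/yaw/throttle
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))
        self.head = nn.Linear(32 * 4 * 4, n_commands)

    def forward(self, frames):            # frames: (B, 3, H, W)
        x = self.features(frames)
        return self.head(x.flatten(1))    # predicted operation commands
```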


Author(s):  
Shun Otsubo
Yasutake Takahashi
Masaki Haruna

This paper proposes an automatic driving system based on a combination of modular neural networks that process human driving data. Research on automatic driving vehicles has been actively conducted in recent years. Machine learning techniques are often utilized to realize an automatic driving system capable of imitating human driving operations. Almost all of them adopt a large monolithic learning module, as typified by deep learning. However, it is inefficient to use a monolithic deep learning module to learn human driving operations (accelerating, braking, and steering) from the visual information obtained while a human drives a vehicle. We propose combining a series of modular neural networks that independently learn visual feature quantities, routes, and driving maneuvers from human driving data, thereby imitating human driving operations and efficiently learning multiple routes. This paper demonstrates the effectiveness of the proposed method through experiments using a small vehicle.
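A minimal sketch of the modular composition described above, assuming three interfaces: a feature network, a route classifier, and one maneuver module per route. The interfaces and the dispatch logic are illustrative; the paper's exact module boundaries may differ.

```python
# Hedged sketch: modular networks for features, route, and maneuvers.
import torch
import torch.nn as nn

class ModularDriver(nn.Module):
    def __init__(self, feature_net, route_net, maneuver_nets):
        super().__init__()
        self.feature_net = feature_net       # image -> feature vector
        self.route_net = route_net           # features -> route logits
        self.maneuver_nets = nn.ModuleList(maneuver_nets)  # one per route

    def forward(self, images):
        z = self.feature_net(images)
        routes = self.route_net(z).argmax(dim=1)
        # Dispatch each sample to its route-specific maneuver module,
        # which outputs accelerate/brake/steer commands.
        outs = [self.maneuver_nets[int(r)](z[i:i + 1])
                for i, r in enumerate(routes)]
        return torch.cat(outs)
```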


Sensors, 2018, Vol 18 (11), pp. 3820
Author(s):  
Jiaxing Ye
Shunya Ito
Nobuyuki Toyama

For many decades, ultrasonic imaging inspection has been a principal method for detecting defects such as voids and corrosion. However, data interpretation relies on an inspector's subjective judgment, making the results vulnerable to human error. Nowadays, advanced computer vision techniques open new perspectives on high-level visual understanding for universal tasks. This research aims to develop an efficient automatic ultrasonic image analysis system for nondestructive testing (NDT) using the latest visual information processing techniques. To this end, we first established an ultrasonic inspection image dataset containing 6849 ultrasonic scan images with full defect/no-defect annotations. Using the dataset, we performed a comprehensive experimental comparison of various computer vision techniques, including both conventional methods using hand-crafted visual features and recent convolutional neural networks (CNNs), which stack multiple layers for representation learning. In the computer vision community, the two groups are referred to as shallow and deep learning, respectively. Experimental results make it clear that the deep learning-enabled system outperformed conventional (shallow) learning schemes by a large margin. We believe this benchmark could serve as a reference for similar research on automatic defect detection in ultrasonic imaging inspection.
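To make the shallow-versus-deep contrast concrete, the sketch below pairs a hand-crafted-feature pipeline (HOG features plus an SVM) with a small CNN trained end to end. The feature choice, classifier, and layer sizes are assumptions for illustration; the paper benchmarks a wider range of methods.

```python
# Hedged sketch: "shallow" (hand-crafted features + SVM) vs. "deep" (CNN).
import torch.nn as nn
from skimage.feature import hog
from sklearn.svm import SVC

def shallow_pipeline(train_imgs, train_labels):
    """Fit an SVM on HOG features (grayscale images, all the same size)."""
    feats = [hog(img) for img in train_imgs]
    return SVC().fit(feats, train_labels)

# Deep: representation learned by stacked convolutions.
deep_model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.LazyLinear(2))    # defect / no-defect logits
```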


2016, Vol 2016, pp. 1-12
Author(s):  
Na Li
Xinbo Zhao
Yongjia Yang
Xiaochun Zou

Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. As a hot and difficult problem, object classification has been receiving extensive interest, with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. The convolutional neural network (CNN), as one deep learning method, can be used to solve the classification problem. However, most deep learning methods, including CNNs, ignore the human visual information processing mechanism that operates when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model with a CNN. First, we use the visual attention model to simulate the human visual selection mechanism. Second, we use a CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects, giving it clear advantages from a biological standpoint. Experimental results demonstrate that our method improves classification efficiency significantly.
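As a concrete stand-in for the attention-then-CNN pipeline described above, the sketch below computes a classic spectral-residual saliency map (Hou & Zhang, 2007) and crops the most salient region for a CNN to process. Using spectral residual as the visual attention model is an assumption; the paper's attention model may differ.

```python
# Hedged sketch: saliency-driven region selection before CNN feature
# extraction. Spectral residual stands in for the visual attention model.
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(gray):
    """Spectral-residual saliency map over a 2-D grayscale image."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

def top_salient_crop(img, sal, size=64):
    """Crop the region around the saliency peak; feed this to the CNN."""
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    y0, x0 = max(0, y - size // 2), max(0, x - size // 2)
    return img[y0:y0 + size, x0:x0 + size]
```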


Author(s):  
Punyanuch Borwarnginn
Worapan Kusakunniran
Parintorn Pooyoi
Jason H. Haga

Recently, data from many sensors have been used in disaster monitoring, such as river water levels, rainfall levels, and snowfall levels. These types of numeric data can be used straightforwardly in further analysis. In contrast, data from CCTV cameras (i.e., images and/or videos) cannot be easily interpreted for users in an automatic way. Traditionally, such data are only provided to users for visualization, without any meaningful interpretation; users must rely on their own expertise and experience to interpret the visual information. Thus, this paper proposes a CNN-based method to automatically interpret images captured from CCTV cameras, using snow scene segmentation as a case example. The CNN models are trained on three classes: snow, non-snow, and non-ground. The non-ground class is explicitly learned in order to keep the models from confusing snow pixels with non-ground pixels, e.g., sky regions. A VGG-19 with pre-trained weights is retrained using manually labeled snow, non-snow, and non-ground samples. The learned models achieve up to 85% sensitivity and 97% specificity for snow area segmentation.
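A minimal sketch of the retraining step described above, assuming torchvision's pre-trained VGG-19 with a frozen convolutional backbone and a new three-class head; whether the authors froze the backbone, and how patch classifications are turned into a segmentation map, are not specified here.

```python
# Hedged sketch: retrain pre-trained VGG-19 for snow / non-snow / non-ground.
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                  # assumption: freeze conv weights
model.classifier[6] = nn.Linear(4096, 3)     # snow / non-snow / non-ground
```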

