MMIR

2009 ◽  
pp. 1189-1204
Author(s):  
Min Chen ◽  
Shu-Ching Chen

This chapter introduces an advanced content-based image retrieval (CBIR) system, MMIR, in which Markov model mediator (MMM) and multiple instance learning (MIL) techniques are integrated seamlessly and act coherently as a hierarchical learning engine to boost both retrieval accuracy and efficiency. It is well understood that the major bottleneck of CBIR systems is the large semantic gap between low-level image features and high-level semantic concepts. In addition, the perception subjectivity problem also challenges a CBIR system. To address these issues, the proposed MMIR system utilizes the MMM mechanism to direct the focus on image-level analysis, together with the MIL technique (with a neural network at its core) to capture and learn object-level semantic concepts in real time with the help of user feedback. In addition, from a long-term learning perspective, the user feedback logs are explored by MMM to speed up the learning process and to increase retrieval accuracy for a query. Comparative studies on a large set of real-world images demonstrate the promising performance of the proposed MMIR system.
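The multiple instance learning idea at the heart of MMIR can be illustrated with a toy sketch: an image is a "bag" of region-level instances, and a bag is relevant to a concept if at least one instance matches it. The similarity function, prototypes, and data below are illustrative assumptions, not the chapter's actual model.

```python
# Toy MIL sketch: bag relevance = relevance of the bag's best instance.

def instance_score(instance, concept):
    # Illustrative similarity: negative squared Euclidean distance
    # between an instance's feature vector and a concept prototype.
    return -sum((a - b) ** 2 for a, b in zip(instance, concept))

def bag_score(bag, concept):
    # MIL aggregation: a bag scores as well as its best instance,
    # so one matching region is enough to make an image relevant.
    return max(instance_score(inst, concept) for inst in bag)

concept = [1.0, 0.0]
positive_bag = [[0.9, 0.1], [5.0, 5.0]]   # one region near the concept
negative_bag = [[4.0, 4.0], [6.0, 2.0]]   # no region near the concept

assert bag_score(positive_bag, concept) > bag_score(negative_bag, concept)
```

The max aggregation is what distinguishes MIL from ordinary supervised learning: labels are attached to whole bags, yet the learner can still localize which instance (image region) carries the concept.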

2015 ◽  
Vol 113 (2) ◽  
pp. 416-421 ◽  
Author(s):  
Adrian Nestor ◽  
David C. Plaut ◽  
Marlene Behrmann

The reconstruction of images from neural data can provide a unique window into the content of human perceptual representations. Although recent efforts have established the viability of this enterprise using functional magnetic resonance imaging (MRI) patterns, these efforts have relied on a variety of prespecified image features. Here, we take on the twofold task of deriving features directly from empirical data and of using these features for facial image reconstruction. First, we use a method akin to reverse correlation to derive visual features from functional MRI patterns elicited by a large set of homogeneous face exemplars. Then, we combine these features to reconstruct novel face images from the corresponding neural patterns. This approach allows us to estimate collections of features associated with different cortical areas as well as to successfully match image reconstructions to corresponding face exemplars. Furthermore, we establish the robustness and the utility of this approach by reconstructing images from patterns of behavioral data. From a theoretical perspective, the current results provide key insights into the nature of high-level visual representations, and from a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach.
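The reverse-correlation step can be sketched numerically: weight each stimulus by the neural response it evoked and average, which recovers the feature a response channel prefers. The linear response model, dimensions, and data below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical data: each row of `stimuli` is a flattened face image,
# `responses` are the corresponding response amplitudes of one channel.
rng = np.random.default_rng(0)
true_feature = rng.normal(size=16)        # feature the channel prefers
stimuli = rng.normal(size=(500, 16))      # 500 face exemplars
responses = stimuli @ true_feature        # assumed linear response model

# Reverse correlation: average the stimuli weighted by the responses
# they evoked; for zero-mean stimuli this recovers the feature up to scale.
estimated = (responses[:, None] * stimuli).mean(axis=0)

corr = np.corrcoef(estimated, true_feature)[0, 1]
```

With enough exemplars the estimate correlates strongly with the underlying feature, which is why a large homogeneous stimulus set matters for this kind of feature derivation.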


Author(s):  
Mohammed Muayad Abdulrazzaq ◽  
Imad FT Yaseen ◽  
SA Noah ◽  
Moayad A. Fadhil

There has been a rise in demand for digitized medical images over the last two decades. Medical images play a pivotal role in surgical planning and serve as an essential source of information about diseases, as a medical reference, and for research and training. Effective techniques for medical image retrieval and classification are therefore required to provide accurate search through a substantial number of images in a timely manner. Given the volume of images involved, manually annotating them is not a viable practice. Additionally, indexing and retrieving them by visual features alone cannot capture the high-level semantic concepts necessary for accurate retrieval and effective classification of medical images. An automatic mechanism is therefore required to address these limitations. To that end, this study formulated an effective classification scheme for X-ray medical images using different feature extraction and classification techniques. Specifically, this study proposed a pertinent feature extraction algorithm for X-ray medical images and determined suitable machine learning methods for automatic X-ray medical image classification. This study also evaluated different image features (chiefly global, local, and combined) and classifiers. The results obtained improve on those of previous related studies.
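A minimal sketch of the global-feature route described above: represent each image by a global intensity histogram and classify with a nearest-centroid rule. The image classes, synthetic data, and classifier choice are assumptions for illustration; the study's actual algorithms differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def histogram_feature(image, bins=8):
    # Global feature: normalized intensity histogram of the whole image.
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Synthetic stand-ins: one hypothetical class is darker on average.
chest = [rng.beta(2, 5, size=(32, 32)) for _ in range(20)]
hand = [rng.beta(5, 2, size=(32, 32)) for _ in range(20)]

# Nearest-centroid classifier over the histogram features.
chest_centroid = np.mean([histogram_feature(im) for im in chest], axis=0)
hand_centroid = np.mean([histogram_feature(im) for im in hand], axis=0)

def classify(image):
    f = histogram_feature(image)
    d_chest = np.linalg.norm(f - chest_centroid)
    d_hand = np.linalg.norm(f - hand_centroid)
    return "chest" if d_chest < d_hand else "hand"

test_image = rng.beta(2, 5, size=(32, 32))  # drawn from the "chest" class
```

Local features (e.g., patch descriptors) and combined global+local vectors would slot into the same pipeline by changing only `histogram_feature`.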


Author(s):  
Nicolas Bougie ◽  
Ryutaro Ichise

Recent success in scaling deep reinforcement learning (DRL) algorithms to complex problems has been driven by well-designed extrinsic rewards, which limits their applicability to the many real-world tasks where rewards are naturally extremely sparse. One solution to this problem is to introduce human guidance to drive the agent’s learning. Although low-level demonstrations are a promising approach, such guidance may be difficult for experts to provide, since some tasks require a large number of high-quality demonstrations. In this work, we explore human guidance in the form of high-level preferences between sub-goals, leading to drastic reductions in both human effort and the cost of exploration. We design a novel hierarchical reinforcement learning method that introduces non-expert human preferences at the high level and uses curiosity to drastically speed up the convergence of sub-policies toward the sub-goals. We further propose a curiosity-based strategy to automatically discover sub-goals. We evaluate the proposed method on 2D navigation tasks, robotic control tasks, and image-based video games (Atari 2600), which have high-dimensional observations, sparse rewards, and complex state dynamics. The experimental results show that the proposed method can learn significantly faster than traditional hierarchical RL methods and drastically reduces the amount of human effort required compared with standard imitation learning approaches.
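Learning sub-goal values from pairwise human preferences can be sketched with a Bradley–Terry model, one common way (and an assumption here, not necessarily the paper's exact formulation) to turn "A is preferred over B" judgments into a value ordering. Sub-goal names and preference data are hypothetical.

```python
import math

def update(values, preferred, other, lr=0.5):
    # Gradient step on the Bradley-Terry log-likelihood, where
    # P(preferred beats other) = sigmoid(v_preferred - v_other).
    p = 1.0 / (1.0 + math.exp(-(values[preferred] - values[other])))
    values[preferred] += lr * (1.0 - p)
    values[other] -= lr * (1.0 - p)

values = {"pick_key": 0.0, "reach_door": 0.0, "wander": 0.0}

# A non-expert annotator consistently prefers picking up the key first,
# then reaching the door, and either of those over aimless wandering.
for _ in range(50):
    update(values, "pick_key", "reach_door")
    update(values, "reach_door", "wander")
    update(values, "pick_key", "wander")
```

After these updates the learned values reproduce the annotator's ordering (`pick_key` > `reach_door` > `wander`), which a high-level policy can then use to sequence sub-goals without any dense extrinsic reward.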


Medicina ◽  
2021 ◽  
Vol 57 (6) ◽  
pp. 527
Author(s):  
Vijay Vyas Vadhiraj ◽  
Andrew Simpkin ◽  
James O’Connell ◽  
Naykky Singh Ospina ◽  
Spyridoula Maraka ◽  
...  

Background and Objectives: Thyroid nodules are solid or fluid-filled lumps that form inside the thyroid gland and can be malignant or benign. Our aim was to test whether the described features of the Thyroid Imaging Reporting and Data System (TI-RADS) could improve radiologists’ decision making when integrated into a computer system. In this study, we developed a computer-aided diagnosis system integrated with multiple-instance learning (MIL) that focuses on benign–malignant classification. Data were available from the Universidad Nacional de Colombia. Materials and Methods: There were 99 cases (33 benign and 66 malignant). The median filter and image binarization were used for image pre-processing and segmentation. The grey level co-occurrence matrix (GLCM) was used to extract seven ultrasound image features. The data were divided into 87% training and 13% validation sets. We compared the support vector machine (SVM) and artificial neural network (ANN) classification algorithms based on their accuracy, sensitivity, and specificity. The outcome measure was whether the thyroid nodule was benign or malignant. We also developed a graphical user interface (GUI) to display the image features that would help radiologists with decision making. Results: ANN and SVM achieved accuracies of 75% and 96%, respectively. SVM outperformed all the other models on all performance metrics, achieving the highest accuracy, sensitivity, and specificity scores. Conclusions: Our study suggests promising results from MIL in thyroid cancer detection. Further testing with external data is required before our classification model can be employed in practice.
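The GLCM step can be illustrated with a minimal sketch of one of its texture features, contrast, computed for a horizontal one-pixel offset. The study extracts seven GLCM features; this shows only the mechanics, on assumed toy textures rather than ultrasound data.

```python
import numpy as np

def glcm(image, levels):
    # Co-occurrence probabilities of grey levels (i, j) where j is the
    # immediate right-hand horizontal neighbour of i.
    m = np.zeros((levels, levels))
    for a, b in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def contrast(p):
    # GLCM contrast: sum over (i, j) of p(i, j) * (i - j)^2,
    # large when neighbouring pixels differ strongly.
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

uniform = np.zeros((8, 8), dtype=int)           # flat texture
checker = np.indices((8, 8)).sum(axis=0) % 2    # alternating texture
```

A flat region yields zero contrast while a checkerboard maximizes it; the other six GLCM features (energy, homogeneity, correlation, and so on) are computed from the same co-occurrence matrix `p`.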


Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 52
Author(s):  
Richard Evan Sutanto ◽  
Sukho Lee

Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data coming through normal channels. Such manipulated data are called adversarial examples. Adversarial examples can pose a major threat to an AI-led society when an attacker uses them as a means to attack an AI system, which is called an adversarial attack. Therefore, major IT companies such as Google are now studying ways to build AI systems that are robust against adversarial attacks by developing effective defense methods. However, one reason it is difficult to establish an effective defense system is that it is hard to know in advance what kind of adversarial attack method the opponent is using. Therefore, in this paper, we propose a method to detect adversarial noise without knowledge of the kind of adversarial noise used by the attacker. To this end, we propose a blurring network that is trained only with normal images and is also used as the initial condition of the Deep Image Prior (DIP) network. This is in contrast to other neural-network-based detection methods, which require many adversarial noisy images to train the neural network. Experimental results demonstrate the validity of the proposed method.
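The intuition behind blur-based detection can be shown with a toy sketch: adversarial perturbations are high-frequency, so blurring changes a perturbed image far more than a clean one. The fixed box filter and sign-noise "attack" below are illustrative assumptions; the paper instead trains a blurring network and uses it inside Deep Image Prior.

```python
import numpy as np

def box_blur(img):
    # 3x3 mean filter via padded neighbourhood averaging.
    p = np.pad(img, 1, mode="edge")
    return sum(p[di:di + img.shape[0], dj:dj + img.shape[1]]
               for di in range(3) for dj in range(3)) / 9.0

def residual_energy(img):
    # Mean squared change the image undergoes when blurred; a large
    # value flags high-frequency content such as adversarial noise.
    return float(np.mean((img - box_blur(img)) ** 2))

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 32)
clean = np.outer(x, x)                                        # smooth image
noisy = clean + 0.1 * rng.choice([-1.0, 1.0], size=clean.shape)  # toy attack
```

Thresholding `residual_energy` separates the two; the paper's contribution is learning the blurring operator from normal images only, so no adversarial examples are needed at training time.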


2014 ◽  
Vol 129 ◽  
pp. 504-515 ◽  
Author(s):  
Liming Yuan ◽  
Jiafeng Liu ◽  
Xianglong Tang

Author(s):  
Bo Wang ◽  
Xiaoting Yu ◽  
Chengeng Huang ◽  
Qinghong Sheng ◽  
Yuanyuan Wang ◽  
...  

The excellent feature extraction ability of deep convolutional neural networks (DCNNs) has been demonstrated in many image processing tasks, by which image classification can achieve high accuracy with only raw input images. However, the specific image features that influence the classification results are not readily determinable, and what lies behind the predictions is unclear. This study proposes a method combining the Sobel and Canny operators and an Inception module for ship classification. The Sobel and Canny operators obtain enhanced edge features from the input images. A convolutional layer is replaced with the Inception module, which can automatically select the proper convolution kernel for ship objects in different image regions. The principle is that the high-level features abstracted by the DCNN and the features obtained by multi-convolution concatenation in the Inception module must ultimately derive from the edge information of the preprocessed input images. This indicates that the classification results are based on the input edge features, which indirectly interprets the classification results to some extent. Experimental results show that the combination of edge features and the Inception module improves DCNN ship classification performance. The original model with the raw dataset has an average accuracy of 88.72%, whereas using enhanced edge features as input achieves the best performance among all models, 90.54%. The model that replaces the fifth convolutional layer with the Inception module achieves 89.50%. It performs close to VGG-16 on the raw dataset and is significantly better than other deep neural networks. The results validate the functionality and feasibility of the idea posited.
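The edge-feature preprocessing step can be sketched with the Sobel operator alone (the paper also uses Canny): gradient magnitudes, rather than raw pixels, become the network input. The plain-loop convolution and synthetic step-edge image below are for illustration.

```python
import numpy as np

# Standard Sobel kernels for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def convolve(img, k):
    # Valid 3x3 cross-correlation (no padding), output shrinks by 2.
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = float((img[i:i + 3, j:j + 3] * k).sum())
    return out

def sobel_magnitude(img):
    gx, gy = convolve(img, KX), convolve(img, KY)
    return np.sqrt(gx ** 2 + gy ** 2)

# A vertical step edge: the response is zero in flat regions and
# peaks along the boundary column.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_magnitude(img)
```

Feeding `edges` (or a Sobel/Canny-enhanced version of the original image) into the DCNN is what ties the learned high-level features back to interpretable edge information.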

