Knowledge Transformations Applied in Image Classification Task

Author(s):  
Krzysztof Wójcik
2021 ◽ Vol 21 (1)
Author(s):  
Kinshuk Sengupta ◽  
Praveen Ranjan Srivastava

Abstract

Background: In medical diagnosis and clinical practice, diagnosing a disease early is crucial for accurate treatment and for lessening the load on the healthcare system. In medical imaging research, image processing techniques are vital for analyzing and resolving diseases with a high degree of accuracy. This paper establishes a new image classification and segmentation method through simulation techniques, conducted over images of COVID-19 patients in India, introducing the use of Quantum Machine Learning (QML) in medical practice.

Methods: This study establishes a prototype model for classifying COVID-19 against non-COVID pneumonia signals in computed tomography (CT) images. The simulation work evaluates quantum machine learning algorithms, assesses the efficacy of deep learning models for the image classification problem, and thereby establishes the performance quality required for an improved prediction rate when dealing with complex clinical image data exhibiting high bias.

Results: The study considers a novel algorithmic implementation leveraging a quantum neural network (QNN). The proposed model outperformed the conventional deep learning models on the specific classification task. The improvement is attributed to the efficiency of quantum simulation and the faster convergence of the optimization problem solved during network training, particularly for the large-scale, biased image classification task. The model run time observed on quantum-optimized hardware was 52 min, compared with 1 h 30 min on K80 GPU hardware for a similar sample size. The simulation shows that the QNN outperforms DNN, CNN, and 2D CNN models by more than 2.92% in accuracy, with an average recall of around 97.7%.

Conclusion: The results suggest that quantum neural networks outperform deep learning models on the COVID-19 trait classification task with respect to model efficacy and training time. However, a further study needs to be conducted to evaluate implementation scenarios by integrating the model within medical devices.
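The abstract does not include the network definition itself; the following is a minimal, hypothetical sketch of a variational quantum classifier of the kind described, written with PennyLane. The four-qubit circuit, angle embedding, entangling layers, feature-reduction assumption, and toy training loop are illustrative choices and not the authors' actual architecture.

```python
# Minimal sketch of a variational quantum classifier (PennyLane).
# The 4-qubit circuit, embedding, and layer structure are assumptions
# for illustration, not the architecture reported in the paper.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, features):
    # Encode a 4-dimensional feature vector (e.g. a dimensionality-reduced
    # CT image descriptor) as rotation angles on the qubits.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Trainable entangling layers play the role of the QNN's hidden layers.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # The Pauli-Z expectation on qubit 0 serves as the classification score.
    return qml.expval(qml.PauliZ(0))

def cost(weights, X, y):
    # Squared error between circuit output (in [-1, 1]) and labels {-1, +1}.
    loss = 0.0
    for features, label in zip(X, y):
        prediction = circuit(weights, features)
        loss = loss + (prediction - label) ** 2
    return loss / len(X)

# Toy data: 8 samples with 4 features, labels -1 (non-COVID) / +1 (COVID).
X = np.random.uniform(0.0, np.pi, size=(8, n_qubits), requires_grad=False)
y = np.array([-1, 1, -1, 1, -1, 1, -1, 1], requires_grad=False)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.random(size=shape, requires_grad=True)

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for step in range(20):
    weights = opt.step(lambda w: cost(w, X, y), weights)
```

In this kind of setup the classical optimizer updates the circuit parameters exactly as it would the weights of a small neural network; the paper's reported speed-up comes from running the circuit on quantum-optimized hardware rather than from this training loop itself.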


2012 ◽ Vol 21 (02) ◽ pp. 1240003
Author(s):  
Mohammad Khazab ◽ 
Dan-Ni Ai ◽ 
Jeffrey Tweedale ◽ 
Yen-Wei Chen ◽ 
Lakhmi Jain

This paper discusses research conducted on developing a Multi-Agent System (MAS) for solving an image classification task. The aim of this research is to equip agents in the MAS with reusable autonomous capabilities. The system provides a flexible framework for developing the communication aspects of an agent-oriented architecture, allowing agents to be programmed so that they dynamically acquire functionality at runtime using event-based messaging. In this research, agents are equipped with unique image processing capabilities and are required to interact and cooperate to achieve the goal. Complementary research on a variety of agent tools (specifically JACK, JADE, and CIAgent) and communication languages (ACL, KQML, FIPA, and SOAP) has been reviewed to glean knowledge that enables these agents to adapt those capabilities. The system has generated encouraging results.
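The abstract gives no implementation details, so the following is a minimal, hypothetical Python sketch of the general idea of agents gaining functionality at runtime through event-based messages on a shared bus. The MessageBus and Agent classes, topic names, and the toy histogram capability are illustrative assumptions, not the JACK/JADE/CIAgent-based system described in the paper.

```python
# Illustrative sketch of event-based capability acquisition in a small
# multi-agent system. Class and topic names are hypothetical; the paper's
# system is built on agent frameworks such as JACK and JADE, not this code.
from collections import defaultdict
from typing import Callable, Dict, List


class MessageBus:
    """Very small publish/subscribe bus used for agent communication."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


class Agent:
    """Agent that can gain new capabilities (functions) at runtime."""

    def __init__(self, name: str, bus: MessageBus) -> None:
        self.name = name
        self.bus = bus
        self.capabilities: Dict[str, Callable[[dict], dict]] = {}
        # Listening on these topics lets other agents push new skills
        # to this agent and request work from it.
        bus.subscribe(f"{name}.add_capability", self._on_add_capability)
        bus.subscribe(f"{name}.task", self._on_task)

    def _on_add_capability(self, message: dict) -> None:
        # Dynamically register a function under a capability name.
        self.capabilities[message["capability"]] = message["function"]

    def _on_task(self, message: dict) -> None:
        result = self.capabilities[message["capability"]](message["payload"])
        self.bus.publish("results", {"agent": self.name, "result": result})


# Usage: one agent grants another a toy image-processing capability,
# then asks it to process a flat list of pixel values.
bus = MessageBus()
worker = Agent("worker", bus)
bus.subscribe("results", lambda m: print(m))

bus.publish("worker.add_capability",
            {"capability": "mean_intensity",
             "function": lambda pixels: {"mean": sum(pixels) / len(pixels)}})
bus.publish("worker.task",
            {"capability": "mean_intensity", "payload": [0, 128, 255]})
```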


Author(s):  
Wenqi Zhao ◽  
Satoshi Oyama ◽  
Masahito Kurihara

Counterfactual explanations help users understand the behavior of machine learning models by showing how the inputs would need to change in order to change the outputs. For an image classification task, an example counterfactual visual explanation answers: "for an example that belongs to class A, what changes do we need to make to the input so that the output is more inclined toward class B?" Our research considers changing the attribute description text of class A on the basis of the attributes of class B and generating counterfactual images from the modified text. The model's predictions on these counterfactual images can then be used to find the attributes that have the greatest effect when the model is predicting classes A and B. We applied our method to a fine-grained image classification dataset and used a generative adversarial network to generate natural counterfactual visual explanations. To evaluate these explanations, we used them to assist crowdsourcing workers in an image classification task and found that, within a specific range, they improved classification accuracy.
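The paper's full pipeline (a text-conditioned GAN plus a fine-grained classifier) is not reproduced here; the sketch below only illustrates the attribute-scoring step implied by the abstract. It assumes a hypothetical `classifier` returning class probabilities and a hypothetical `generate_counterfactual` function that turns modified attribute text into an image, and measures how much swapping each attribute of class A toward class B moves the model's prediction toward B.

```python
# Hedged sketch of the attribute-scoring step for counterfactual visual
# explanations. `classifier` and `generate_counterfactual` are hypothetical
# stand-ins for the paper's fine-grained classifier and text-to-image GAN.
from typing import Callable, Dict, List, Sequence, Tuple


def attribute_effects(
    attributes_a: Dict[str, str],          # attribute text of source class A
    attributes_b: Dict[str, str],          # attribute text of target class B
    generate_counterfactual: Callable[[Dict[str, str]], "Image"],
    classifier: Callable[["Image"], Sequence[float]],  # class probabilities
    class_b_index: int,
) -> List[Tuple[str, float]]:
    """Score each attribute by how much swapping it toward class B raises
    the classifier's probability for class B."""
    base_image = generate_counterfactual(attributes_a)
    base_prob_b = classifier(base_image)[class_b_index]

    effects = []
    for name, value_b in attributes_b.items():
        # Replace a single attribute description of A with B's version.
        modified = dict(attributes_a)
        modified[name] = value_b
        counterfactual_image = generate_counterfactual(modified)
        prob_b = classifier(counterfactual_image)[class_b_index]
        # A positive effect means this attribute pushes the prediction toward B.
        effects.append((name, prob_b - base_prob_b))

    # Attributes with the greatest effect come first; these are the ones
    # presented to users (e.g. crowdsourcing workers) as explanations.
    return sorted(effects, key=lambda item: item[1], reverse=True)
```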


2019 ◽ Vol 111 ◽ pp. 148-154
Author(s):  
Titus J. Brinker ◽  
Achim Hekler ◽  
Alexander H. Enk ◽  
Joachim Klode ◽  
Axel Hauschild ◽  
...  
