A Review of Machine Learning Classification Using Quantum Annealing for Real-World Applications

2021, Vol 2 (5)
Author(s): Rajdeep Kumar Nath, Himanshu Thapliyal, Travis S. Humble


AI Magazine, 2008, Vol 29 (3), pp. 93
Author(s): Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, ...

Many real-world applications produce networked data such as the world-wide web (hypertext documents connected via hyperlinks), social networks (for example, people connected by friendship links), communication networks (computers connected via communication links) and biological networks (for example, protein interaction networks). A recent focus in machine learning research has been to extend traditional machine learning classification techniques to classify nodes in such networks. In this article, we provide a brief introduction to this area of research and how it has progressed during the past decade. We introduce four of the most widely used inference algorithms for classifying networked data and empirically compare them on both synthetic and real-world data.
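To make the idea of classifying nodes in networked data concrete, below is a minimal sketch of iterative classification, one scheme commonly discussed in this literature. The feature layout, classifier choice, and function names are illustrative assumptions, not the article's exact algorithms.

```python
# A minimal sketch of iterative classification for node labelling in a network.
# Assumes integer class labels 0..K-1; all names and choices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def iterative_classification(features, adjacency, labels, labeled_mask, n_iters=10):
    """features: (n, d) node attributes; adjacency: (n, n) 0/1 matrix;
    labels: (n,) known classes for labelled nodes; labeled_mask: (n,) bool."""
    n_classes = len(np.unique(labels[labeled_mask]))

    def relational(current):
        # Relational feature: distribution of neighbours' current labels.
        rel = np.zeros((len(current), n_classes))
        for i in range(len(current)):
            nbrs = np.nonzero(adjacency[i])[0]
            for j in nbrs:
                rel[i, current[j]] += 1
            if len(nbrs):
                rel[i] /= len(nbrs)
        return rel

    # Bootstrap unlabelled nodes with a content-only classifier.
    current = labels.copy()
    clf = LogisticRegression(max_iter=1000).fit(features[labeled_mask], labels[labeled_mask])
    current[~labeled_mask] = clf.predict(features[~labeled_mask])

    # Alternate between recomputing relational features and re-predicting labels.
    for _ in range(n_iters):
        X = np.hstack([features, relational(current)])
        clf = LogisticRegression(max_iter=1000).fit(X[labeled_mask], labels[labeled_mask])
        new = clf.predict(X[~labeled_mask])
        if np.array_equal(new, current[~labeled_mask]):
            break
        current[~labeled_mask] = new
    return current
```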


2021, Vol 54 (6), pp. 1-35
Author(s): Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by drawing on existing work in their respective fields.
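As a concrete illustration of the kind of group-fairness definitions such taxonomies cover, here is a minimal sketch of two widely used checks (demographic parity and equal opportunity gaps). The arrays, function names, and groupings are illustrative placeholders, not taken from the survey.

```python
# A minimal sketch of two common group-fairness checks; names and data are illustrative.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups (0/1 arrays)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Example: audit a classifier's decisions with respect to a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group), equal_opportunity_gap(y_true, y_pred, group))
```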


2021
Author(s): Andreas Christ Sølvsten Jørgensen, Atiyo Ghosh, Marc Sturrock, Vahid Shahrezaei

Abstract

The modelling of many real-world problems relies on computationally heavy simulations. Since statistical inference rests on repeated simulations to sample the parameter space, the high computational expense of these simulations can become a stumbling block. In this paper, we compare two ways to mitigate this issue based on machine learning methods. One approach is to construct lightweight surrogate models to substitute for the simulations used in inference. Alternatively, one might altogether circumvent the need for Bayesian sampling schemes and directly estimate the posterior distribution. We focus on stochastic simulations that track autonomous agents and present two case studies of real-world applications: tumour growth and the spread of infectious diseases. We demonstrate that good accuracy in inference can be achieved with a relatively small number of simulations, making our machine learning approaches orders of magnitude faster than classical simulation-based methods that rely on sampling the parameter space. However, we find that while some methods generally produce more robust results than others, no algorithm offers a one-size-fits-all solution when attempting to infer model parameters from observations. Instead, one must choose the inference technique with the specific real-world application in mind. The stochastic nature of the considered real-world phenomena poses an additional challenge that can become insurmountable for some approaches. Overall, we find machine learning approaches that create direct inference machines to be promising for real-world applications. We present our findings as general guidelines for modelling practitioners.

Author summary

Computer simulations play a vital role in modern science as they are commonly used to compare theory with observations. One can thus infer the properties of an observed system by comparing the data to the predicted behaviour in different scenarios. Each of these scenarios corresponds to a simulation with slightly different settings. However, since real-world problems are highly complex, the simulations often require extensive computational resources, making direct comparisons with data challenging, if not insurmountable. It is, therefore, necessary to resort to inference methods that mitigate this issue, but it is not clear-cut which path to choose for any specific research problem. In this paper, we provide general guidelines for how to make this choice. We do so by studying examples from oncology and epidemiology and by taking advantage of developments in machine learning. More specifically, we focus on simulations that track the behaviour of autonomous agents, such as single cells or individuals. We show that the best way forward is problem-dependent and highlight the methods that yield the most robust results across the different case studies. We demonstrate that these methods are highly promising and produce reliable results in a small fraction of the time required by classic approaches that rely on comparisons between data and individual simulations. Rather than relying on a single inference technique, we recommend employing several methods and selecting the most reliable based on predetermined criteria.
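To illustrate the surrogate-model idea in miniature, the sketch below trains a cheap emulator of an "expensive" stochastic simulator and then uses it inside a rejection-ABC loop. The toy simulator, summary statistic, prior range, and tolerance are all illustrative assumptions, not the paper's tumour or epidemic models.

```python
# A minimal sketch of surrogate-assisted simulation-based inference.
# Everything here (simulator, tolerance, prior) is a toy placeholder.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulator(theta):
    """Stand-in for an expensive agent-based simulation: returns one summary statistic."""
    return theta * 2.0 + rng.normal(scale=0.5)

# 1) Run a modest number of real simulations to train the surrogate.
thetas = rng.uniform(0.0, 5.0, size=200)
summaries = np.array([simulator(t) for t in thetas])
surrogate = RandomForestRegressor(n_estimators=200).fit(thetas.reshape(-1, 1), summaries)

# 2) Rejection ABC against observed data, querying only the cheap surrogate.
observed = 4.2
candidates = rng.uniform(0.0, 5.0, size=100_000)
predicted = surrogate.predict(candidates.reshape(-1, 1))
posterior_sample = candidates[np.abs(predicted - observed) < 0.2]
print(posterior_sample.mean(), posterior_sample.std())
```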


2021
Author(s): Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar

Understanding complex machine learning models such as deep neural networks with explanations is crucial in various applications. Many explanations stem from the model perspective and may not effectively communicate why the model is making its predictions at the right level of abstraction. For example, providing importance weights for individual pixels in an image can only express which parts of that particular image are important to the model, whereas humans may prefer an explanation phrased in terms of concepts. In this work, we review the emerging area of concept-based explanations. We start by introducing concept explanations, including the class of Concept Activation Vectors (CAVs), which characterize concepts using vectors in appropriate spaces of neural activations, and we discuss properties of useful concepts and approaches to measure the usefulness of concept vectors. We then discuss approaches to automatically extract concepts and to address some of their caveats. Finally, we discuss case studies that showcase the utility of such concept-based explanations in synthetic settings and real-world applications.
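As a rough sketch of how a CAV is obtained in practice: a linear classifier is fit to separate a layer's activations for concept examples from activations for random examples, and its normal vector serves as the concept direction; a TCAV-style score then counts how often the class score's gradient points along that direction. The activations and gradients below are random placeholders standing in for a real network.

```python
# A minimal CAV/TCAV-style sketch; activations and gradients are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 128  # width of the chosen network layer (illustrative)

concept_acts = rng.normal(loc=0.5, size=(100, d))   # activations for concept images
random_acts  = rng.normal(loc=0.0, size=(100, d))   # activations for random images

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]   # the concept direction

# Gradients of the target class score w.r.t. the same layer, one per test input
# (in practice obtained by backpropagation through the trained model).
grads = rng.normal(size=(50, d))
tcav_score = float(np.mean(grads @ cav > 0))  # fraction of inputs sensitive to the concept
print(tcav_score)
```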


Author(s): Wen Xu, Jing He, Yanfeng Shu

Transfer learning is an emerging technique in machine learning by which a new task can be solved with knowledge obtained from an old task, addressing the lack of labeled data. In particular, deep domain adaptation (a branch of transfer learning) has received the most attention in recently published articles. The intuition behind this is that deep neural networks usually have a large capacity to learn representations from one dataset, and part of that information can be reused for a new task. In this research, we first present the complete scenarios of transfer learning according to the domains and tasks. Second, we conduct a comprehensive survey of deep domain adaptation and categorize the recent advances into three types based on their implementation: fine-tuning networks, adversarial domain adaptation, and sample-reconstruction approaches. Third, we discuss the details of these methods and introduce some typical real-world applications. Finally, we conclude our work and explore some potential issues to be further addressed.
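To make the adversarial category concrete, here is a DANN-style sketch in PyTorch: a shared feature extractor feeds both a label head and a domain discriminator, with a gradient reversal layer pushing the features to become domain-invariant. Layer sizes, the lambda weight, and module names are illustrative assumptions.

```python
# A minimal DANN-style sketch of adversarial domain adaptation with gradient reversal.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim=256, n_classes=10, lam=1.0):
        super().__init__()
        self.lam = lam
        self.feature = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.label_head = nn.Linear(128, n_classes)   # trained on labelled source data
        self.domain_head = nn.Linear(128, 2)          # source vs. target discriminator

    def forward(self, x):
        f = self.feature(x)
        # The reversed gradient makes the feature extractor fool the domain head.
        return self.label_head(f), self.domain_head(GradReverse.apply(f, self.lam))
```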


Kybernetes, 2017, Vol 46 (4), pp. 693-705
Author(s): Yasser F. Hassan

Purpose
This paper aims to utilize machine learning and soft computing to propose a new method of rough sets using a deep learning architecture for many real-world applications.

Design/methodology/approach
The objective of this work is to propose a model for deep rough set theory that uses more than one decision table and approximates these tables to a classification system, i.e. the paper proposes a novel framework of deep learning based on multi-decision tables.

Findings
The paper tries to coordinate the local properties of individual decision tables to provide an appropriate global decision from the system.

Research limitations/implications
Rough set learning assumes the existence of a single decision table, whereas real-world decision problems imply several decisions with several different decision tables. The newly proposed model can handle multiple decision tables.

Practical implications
The proposed classification model is implemented on social networks with preferred features that are freely distributed as social entities, with accuracy around 91 per cent.

Social implications
Deep learning using rough set theory simulates the way the brain thinks and can solve the problem of different information about the same problem existing in different decision systems.

Originality/value
This paper utilizes machine learning and soft computing to propose a new method of rough sets using a deep learning architecture for many real-world applications.
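For background, the sketch below shows the classical rough-set lower and upper approximations computed from a single toy decision table; the multi-table deep framework proposed in the paper would combine such local approximations, but the table, attributes, and combination step here are purely illustrative.

```python
# A minimal sketch of rough-set lower/upper approximations on one toy decision table.
from collections import defaultdict

# Each row: (condition attributes, decision)
table = [
    (("high", "yes"), "flu"),
    (("high", "yes"), "flu"),
    (("high", "no"),  "flu"),
    (("normal", "no"), "healthy"),
    (("high", "no"),  "healthy"),   # conflicts with row 3 -> boundary region
]

# Indiscernibility classes: rows that share the same condition attributes.
blocks = defaultdict(list)
for i, (cond, _) in enumerate(table):
    blocks[cond].append(i)

target = {i for i, (_, dec) in enumerate(table) if dec == "flu"}
# Lower approximation: blocks entirely inside the target concept.
lower = set().union(*([set(b) for b in blocks.values() if set(b) <= target] or [set()]))
# Upper approximation: blocks that overlap the target concept at all.
upper = set().union(*([set(b) for b in blocks.values() if set(b) & target] or [set()]))
print("lower:", sorted(lower), "upper:", sorted(upper))
```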


Author(s): Chunsheng Yang, Yanni Zou, Jie Liu, Kyle R Mulligan

In the past decades, machine learning techniques or algorithms, particularly classifiers, have been widely applied to various real-world applications such as prognostics and health management (PHM). In developing high-performance classifiers or machine learning-based models, i.e. predictive models for PHM, model evaluation remains a challenge. Generic methods such as accuracy may not fully meet the needs of model evaluation for prognostic applications. This paper addresses this issue from the point of view of PHM systems. Generic methods are first reviewed, outlining their limitations or deficiencies with respect to PHM. Then, two approaches developed for evaluating predictive models are presented, with emphasis on the specificities and requirements of PHM. A real prognostic application is studied to demonstrate the usefulness of the two proposed methods for predictive model evaluation. We argue that predictive models for PHM must be evaluated not only with generic methods, but also with domain-oriented approaches, in order to deploy the models in real-world applications.
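To show why generic metrics can fall short for prognostics, here is a small sketch contrasting mean absolute error with an asymmetric scoring function of the kind used in prognostic challenges, where predicting failure too late is penalised more heavily than predicting it too early. The penalty constants and data are illustrative, not the paper's proposed evaluation methods.

```python
# A minimal sketch contrasting a generic metric (MAE) with a domain-oriented,
# asymmetric remaining-useful-life (RUL) score; constants are illustrative.
import numpy as np

def asymmetric_prognostic_score(rul_true, rul_pred, a_early=13.0, a_late=10.0):
    d = rul_pred - rul_true              # d > 0 means the model predicts failure too late
    penalties = np.where(d < 0, np.exp(-d / a_early) - 1, np.exp(d / a_late) - 1)
    return penalties.sum()

rul_true = np.array([50.0, 30.0, 10.0])
early    = np.array([45.0, 25.0,  5.0])   # conservative model
late     = np.array([55.0, 35.0, 15.0])   # risky model, same absolute error

print(np.mean(np.abs(early - rul_true)), np.mean(np.abs(late - rul_true)))  # identical MAE
print(asymmetric_prognostic_score(rul_true, early),
      asymmetric_prognostic_score(rul_true, late))                          # late model scores worse
```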


2018, Vol 255, pp. 1191-1210
Author(s): S. De Vito, E. Esposito, M. Salvato, O. Popoola, F. Formisano, ...


2017
Author(s): Michael Veale

Presented as a talk at the 4th Workshop on Fairness, Accountability and Transparency in Machine Learning (FAT/ML 2017), Halifax, Nova Scotia, Canada.

Machine learning systems are increasingly used to support public sector decision-making across a variety of sectors. Given concerns around accountability in these domains, and amidst accusations of intentional or unintentional bias, there have been increased calls for transparency of these technologies. Few, however, have considered how logics and practices concerning transparency have been understood by those involved in the machine learning systems already being piloted and deployed in public bodies today. This short paper distils insights about transparency on the ground from interviews with 27 such actors, largely public servants and relevant contractors, across 5 OECD countries. Considering transparency and opacity in relation to trust and buy-in, better decision-making, and the avoidance of gaming, it seeks to provide useful insights for those hoping to develop socio-technical approaches to transparency that are useful to practitioners on the ground.


2020
Author(s): Sathappan Muthiah, Debanjan Datta, Mohammad Raihanul Islam, Patrick Butler, Andrew Warren, ...

Abstract

Toxin classification of protein sequences is a challenging task with real-world applications in healthcare and synthetic biology. Due to an ever-expanding database of proteins and the inordinate cost of manual annotation, automated machine learning-based approaches are crucial. Approaches need to overcome the challenges of homology, multi-functionality, and structural diversity among proteins in this task. We propose a novel deep learning-based method, ProtTox, that aims to address some of the shortcomings of previous approaches in classifying proteins as toxins or not. Our method achieves an F1-score of 0.812, which is about 5% higher than the closest performing baseline.
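For orientation only, the sketch below shows a simple baseline of the kind such deep methods are compared against: overlapping k-mer counts fed to a linear classifier, evaluated with the F1-score. This is not ProtTox; the sequences, labels, and k-mer size are illustrative placeholders.

```python
# A minimal baseline sketch (not ProtTox): k-mer features plus a linear classifier
# for binary toxin prediction, evaluated with the F1-score. Data are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

sequences = ["MKTLLVAGAV", "MKKLLFAIPL", "MSTNPKPQRK", "MGDVEKGKKI"] * 25
labels = [1, 1, 0, 0] * 25   # 1 = toxin, 0 = non-toxin (placeholder labels)

# Represent each protein as counts of overlapping 3-mers of amino acids.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 3))
X = vectorizer.fit_transform(sequences)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```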

