Quantum-Computing-Inspired Algorithms in Machine Learning

Author(s):  
Deeksha Kaul ◽  
Harika Raju ◽  
B. K. Tripathy

In this chapter, the authors discuss the use of quantum computing concepts to optimize the decision-making capability of classical machine learning algorithms. Machine learning, a subfield of artificial intelligence, implements various techniques to train a computer to learn and adapt to real-time tasks. As data volumes grow exponentially, solving the same problems with classical algorithms becomes increasingly tedious and time-consuming. Quantum computing has applications across many areas of computer science, and machine learning is one area it has substantially transformed. Quantum computing, with its ability to perform certain tasks in logarithmic time, helps overcome the limitations of classical machine learning algorithms.


2020 ◽  
Vol 23 (5) ◽  
pp. 1044-1057
Author(s):  
Leonid Nikolaevich Parenyuk ◽  
Vlada Vladimirovna Kugurakova

There are various approaches to creating artificial intelligence in games, each with its own advantages and disadvantages. This study describes the authors' implementation of NPC behavior using machine learning algorithms integrated with the Unity environment in real time. This approach can be used in game development.
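The abstract does not name the learning algorithm used for the NPC behavior, so the following is only a minimal sketch of one common choice, tabular Q-learning, with a hypothetical two-state NPC (states, actions, and rewards are invented for illustration; a real Unity integration would run such an update inside the game loop):

```python
import random

ACTIONS = ["patrol", "chase", "flee"]

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Toy training loop: the NPC is rewarded only for chasing when the player is near.
random.seed(0)
q = {}
for _ in range(500):
    state = random.choice(["player_near", "player_far"])
    action = random.choice(ACTIONS)
    reward = 1.0 if (state == "player_near" and action == "chase") else 0.0
    q_update(q, state, action, reward, "player_far")
```

After training, the learned table prefers "chase" in the "player_near" state, which is the kind of adaptive decision rule an NPC controller can query each frame.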


2021 ◽  
Vol 29 (Supplement_1) ◽  
pp. i18-i18
Author(s):  
N Hassan ◽  
R Slight ◽  
D Weiand ◽  
A Vellinga ◽  
G Morgan ◽  
...  

Abstract

Introduction: Sepsis is a life-threatening condition associated with increased mortality. Artificial intelligence tools can inform clinical decision making by flagging patients who may be at risk of developing infection and subsequent sepsis, and can assist clinicians with their care management.

Aim: To identify the optimal set of predictors used to train machine learning algorithms to predict the likelihood of infection and subsequent sepsis and to inform clinical decision making.

Methods: This systematic review was registered in the PROSPERO database (CRD42020158685). We searched three large databases (Medline, Cumulative Index of Nursing and Allied Health Literature, and Embase) using appropriate search terms. We included quantitative primary research studies that focused on sepsis prediction associated with bacterial infection in the adult population (>18 years) in all care settings and that included data on the predictors used to develop machine learning algorithms. The search covered 1 January 2000 to 25 November 2019. Data extraction was performed using a data extraction sheet, and a narrative synthesis of eligible studies was undertaken; narrative analysis was used to arrange the data into key areas and to compare and contrast the content of the included studies. Quality was assessed using the Newcastle-Ottawa Quality Assessment Scale for non-randomized studies. Risk of bias was not assessed owing to the non-randomised nature of the included studies.

Results: Fifteen articles met our inclusion criteria (Figure 1). We identified 194 predictors used to train machine learning algorithms to predict infection and subsequent sepsis, with 13 predictors used on average across the included studies. The most significant predictors included age, gender, smoking, alcohol intake, heart rate, blood pressure, lactate level, cardiovascular disease, endocrine disease, cancer, chronic kidney disease (eGFR < 60 ml/min), white blood cell count, liver dysfunction, surgical approach (open or minimally invasive), and pre-operative haematocrit < 30%. These predictors were used in the development of all the algorithms in the fifteen articles. All included studies used artificial intelligence techniques to predict the likelihood of sepsis, with an average sensitivity of 77.5 ± 19.27 and an average specificity of 69.45 ± 21.25.

Conclusion: The type of predictors used was found to influence the predictive power and predictive timeframe of the developed machine learning algorithms. Two strengths of our review are that we included studies published since the first definition of sepsis in 2001 and that we identified factors that can improve the predictive ability of the algorithms. However, the included studies had some limitations: three studies did not validate the models they developed, and many tools were limited by reduced specificity, reduced sensitivity, or both. This work has important implications for practice, as predicting the likelihood of sepsis can help inform the management of patients and concentrate finite resources on the patients most at risk. Producing a set of predictors can also guide future studies in developing more sensitive and specific algorithms with an increased predictive time window that allows for preventive clinical measures.
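The sensitivity and specificity figures reported above are the standard confusion-matrix ratios. As a reminder of how they are computed, here is a minimal sketch with hypothetical confusion counts (the counts below are invented for illustration, not taken from the reviewed studies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for one sepsis-prediction model:
# 62 true positives, 18 false negatives, 139 true negatives, 61 false positives.
sens, spec = sensitivity_specificity(tp=62, fn=18, tn=139, fp=61)
# sens = 62/80 = 0.775 and spec = 139/200 = 0.695, close to the review's
# average sensitivity (77.5%) and specificity (69.45%).
```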


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Ashwin A. Phatak ◽  
Franz-Georg Wieland ◽  
Kartik Vempala ◽  
Frederik Volkmar ◽  
Daniel Memmert

Abstract

With the rising amount of data in the sports and health sectors, a plethora of applications using big data mining have become possible. Multiple frameworks have been proposed to mine, store, preprocess, and analyze physiological vitals data using artificial intelligence and machine learning algorithms. Comparatively less research has been done on collecting potentially high-volume, high-quality 'big data' in an organized, time-synchronized, and holistic manner to solve similar problems in multiple fields. Although a large number of data collection devices exist in the form of sensors, they are either highly specialized, univariate, and fragmented in nature, or confined to a lab setting. The current study proposes the artificial intelligence-based body sensor network framework (AIBSNF), a framework for the strategic use of body sensor networks (BSN) that combines a real-time location system (RTLS) with wearable biosensors to collect multivariate, low-noise, high-fidelity data. This facilitates the gathering of time-synchronized location and physiological vitals data, which enables artificial intelligence and machine learning (AI/ML)-based time series analysis. The study gives a brief overview of wearable sensor technology and RTLS, and provides use cases of AI/ML algorithms in the field of sensor fusion. It also elaborates sample scenarios using a specific sensor network consisting of pressure sensors (insoles), accelerometers, gyroscopes, ECG, EMG, and RTLS position detectors for particular applications in health care and sports. The AIBSNF may provide a solid blueprint for research and development, forming a smooth end-to-end pipeline from data collection using BSN and RTLS to final-stage analytics based on AI/ML algorithms.
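A core step in fusing RTLS and biosensor streams is time synchronization. The abstract does not specify the alignment method, so the following is only a sketch of one common approach, nearest-timestamp matching with a tolerance, on invented sample streams (the sensor rates and values are hypothetical):

```python
import bisect

def align_nearest(base, other, tolerance=0.05):
    """Pair each (t, value) sample in `base` with the nearest-in-time sample
    from `other`, dropping pairs whose timestamps differ by more than `tolerance` seconds."""
    times = [t for t, _ in other]
    pairs = []
    for t, v in base:
        i = bisect.bisect_left(times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(times[k] - t))
        if abs(times[j] - t) <= tolerance:
            pairs.append((t, v, other[j][1]))
    return pairs

# Hypothetical streams: RTLS position at 10 Hz, ECG-derived heart rate at ~4 Hz.
rtls = [(0.0, (1.0, 2.0)), (0.1, (1.1, 2.0)), (0.2, (1.2, 2.1))]
hr = [(0.02, 71), (0.27, 72)]
fused = align_nearest(rtls, hr)  # only pairs within 50 ms survive
```

With a 50 ms tolerance, only the first RTLS sample finds a close-enough heart-rate reading; the fused records then feed AI/ML time series analysis downstream.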


2021 ◽  
Author(s):  
Jorge Crespo Alvarez ◽  
Bryan Ferreira Hernández ◽  
Sandra Sumalla Cano

This work, developed under the NUTRIX Project, aims to develop artificial intelligence algorithms, based on the open-source platform KNIME, that characterize and predict an individual's adherence to a diet before treatment begins. The machine learning algorithms developed in this project significantly increased the confidence that a patient will abandon the treatment (diet) before starting it: from an a priori probability of 17.6% up to 96.5%. This can serve as valuable guidance during the decision-making process of professionals in the area of dietetics and nutrition.
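The paper's exact computation is not given in the abstract, but one hedged way to read a jump from a 17.6% base rate to a ~96.5% confidence is as a Bayesian update: a sufficiently sensitive and specific classifier raises the posterior dropout probability for the patients it flags. The sensitivity and specificity below are invented to reproduce numbers of that magnitude:

```python
def posterior(prior, sensitivity, specificity):
    """P(dropout | flagged) via Bayes' rule."""
    p_flag = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_flag

# Illustrative numbers only: a classifier with 95% sensitivity and 99.3%
# specificity lifts a 17.6% base rate of diet dropout to a ~96.7% posterior.
p = posterior(prior=0.176, sensitivity=0.95, specificity=0.993)
```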


Intelligent technology has touched and improved almost every aspect of the employee life cycle, and human resources is one of the areas that has benefited greatly. The transformation of work mainly questions the way we work, where we work, and how we work, and is chiefly concerned with the environment and surroundings in which we work. The main goal is to support organizations in breaking out of their traditional ways of working and moving towards an environment that is more pleasant, flexible, empowering, and communicative. Machine learning, algorithms, and artificial intelligence are the latest technologies buzzing in HR professionals' minds. Artificial intelligence is designed to take decisions based on the data fed into its programs. The key difference between rhythm and balance is that of choice versus adjustment; choice is made easier only with the help of prioritization, quick decision making, time, and communication. To sustain this, digitalisation plays a vital role. In this paper, we suggest that artificial assistants focus on improving the rhythm of the individual.


Author(s):  
Abdullah Saif Mohammed Al Qassabi ◽  
C. Jayakumari

The check-in area in airports is the first step before departure. These areas are usually very crowded with passengers, who are very sensitive about the time remaining before their flight's check-in. Thus, delays in the check-in process have an impact on passengers, which in turn has consequences for the airline company. Delays in the check-in process can occur for various reasons, such as airline agent absence, where the agent is not available at the check-in desk when check-in opens. Artificial intelligence and machine learning are computer science technologies that can help prevent such problems from occurring. In this paper, machine learning algorithms are studied to detect staff availability at the check-in desk. Staff are detected by a camera using a facial recognition algorithm, and analyses are then performed on the results. Whenever no staff member is found, the system notifies the airline company and sends an expected delay time. This solution can be implemented as a smart system based on artificial intelligence and machine learning concepts. The suggested solutions can reduce costs for airline companies and improve passenger satisfaction.
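The paper does not specify the alerting logic, but the monitoring loop it describes can be sketched as a simple state machine: raise an alert once a run of consecutive camera frames contains no detected staff face. The detector below is a stub standing in for a real facial recognition model, and the frame format and window size are assumptions:

```python
def monitor(frames, detect_face, window=5):
    """Raise a staff-absence alert once `window` consecutive frames show no face.
    `detect_face` stands in for a real face-recognition model run on camera frames."""
    misses = 0
    alerts = []
    for i, frame in enumerate(frames):
        if detect_face(frame):
            misses = 0  # staff visible: reset the absence counter
        else:
            misses += 1
            if misses == window:
                alerts.append(i)  # notify the airline with an expected delay time
    return alerts

# Stub detector: each frame is just a bool saying whether staff are visible.
frames = [True, True, False, False, False, False, False, True]
alerts = monitor(frames, detect_face=lambda f: f)  # alert fires at frame index 6
```

Requiring several consecutive misses, rather than alerting on a single empty frame, guards against momentary detection failures.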


Author(s):  
M. A. Fesenko ◽  
G. V. Golovaneva ◽  
A. V. Miskevich

A new model, «Prognosis of men's reproductive function disorders», was developed. Machine learning algorithms (artificial intelligence) were used for this purpose, and the model has high prognostic accuracy. The aim of applying the model is to prioritize diagnostic and preventive measures so as to minimize complications of reproductive system diseases and preserve workers' health and efficiency.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Alan Brnabic ◽  
Lisa M. Hess

Abstract

Background: Machine learning is a broad term encompassing a number of methods that allow the investigator to learn from the data. These methods may permit large real-world databases to be more rapidly translated into applications that inform patient-provider decision making.

Methods: This systematic literature review was conducted to identify published observational research that employed machine learning to inform decision making at the patient-provider level. The search strategy was implemented and studies meeting the eligibility criteria were evaluated by two independent reviewers. Relevant data related to study design, statistical methods, and strengths and limitations were identified; study quality was assessed using a modified version of the Luo checklist.

Results: A total of 34 publications from January 2014 to September 2020 were identified and evaluated for this review. Diverse methods, statistical packages, and approaches were used across the identified studies; the most common methods were decision tree and random forest approaches. Most studies applied internal validation, but only two conducted external validation. Most studies utilized one algorithm, and only eight studies applied multiple machine learning algorithms to the data. Seven items on the Luo checklist failed to be met by more than 50% of the published studies.

Conclusions: A wide variety of approaches, algorithms, statistical software, and validation strategies were employed in the application of machine learning methods to inform patient-provider decision making. To ensure that decisions for patient care are made with the highest quality evidence, multiple machine learning approaches should be used, the model selection strategy should be clearly defined, and both internal and external validation should be performed. Future work should routinely employ ensemble methods incorporating multiple machine learning algorithms.
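The review does not prescribe a specific ensemble scheme, but the simplest way to combine multiple machine learning algorithms, as its conclusion recommends, is hard majority voting. The three stub "models" below are invented stand-ins (e.g. for a decision tree, a random forest, and a logistic regression), each mapping a feature vector to a class label:

```python
from collections import Counter

def majority_vote(models, x):
    """Hard-voting ensemble: each model casts a class label; the mode wins."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical stand-ins for three fitted classifiers.
tree = lambda x: 1 if x[0] > 0.5 else 0
forest = lambda x: 1 if x[0] + x[1] > 0.8 else 0
logreg = lambda x: 1 if x[1] > 0.9 else 0

pred = majority_vote([tree, forest, logreg], (0.7, 0.3))  # two of three vote 1
```

In practice the constituent models would be trained and validated (internally and externally) before their votes are combined.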


2020 ◽  
Vol 237 (12) ◽  
pp. 1430-1437
Author(s):  
Achim Langenbucher ◽  
Nóra Szentmáry ◽  
Jascha Wendelstein ◽  
Peter Hoffmann

Abstract

Background and Purpose: In the last decade, artificial intelligence and machine learning algorithms have become increasingly established for the screening and detection of diseases and pathologies, as well as for describing interactions between measures where classical methods are too complex or fail. The purpose of this paper is to model the measured postoperative position of an intraocular lens implant after cataract surgery, based on preoperatively assessed biometric effect sizes, using machine learning techniques.

Patients and Methods: In this study, we enrolled 249 eyes of patients who underwent elective cataract surgery at Augenklinik Castrop-Rauxel. Eyes were measured preoperatively with the IOLMaster 700 (Carl Zeiss Meditec), as well as preoperatively and postoperatively with the Casia 2 OCT (Tomey). Based on the preoperative effect sizes (axial length, corneal thickness, internal anterior chamber depth, thickness of the crystalline lens, mean corneal radius, and corneal diameter), a selection of 17 machine learning algorithms was tested for prediction performance in calculating the internal anterior chamber depth (AQD_post) and the axial position of the equatorial plane of the lens in the pseudophakic eye (LEQ_post).

Results: The 17 machine learning algorithms (from 4 families) varied in root mean squared/mean absolute prediction error between 0.187/0.139 mm and 0.255/0.204 mm (AQD_post), and between 0.183/0.135 mm and 0.253/0.206 mm (LEQ_post), using 5-fold cross-validation. A Gaussian process regression model using an exponential kernel showed the best performance in terms of root mean squared error for the prediction of both AQD_post and LEQ_post. If the entire dataset is used (without splitting into training and validation data), a simple multivariate linear regression model yielded a root mean squared prediction error for AQD_post/LEQ_post of 0.188/0.187 mm, versus 0.166/0.159 mm for the best-performing Gaussian process regression model.

Conclusion: In this paper we show the principles of supervised machine learning applied to predicting the measured physical postoperative axial position of intraocular lenses. Based on our limited data pool and the algorithms used in our setting, the benefit of machine learning algorithms appears limited compared with a standard multivariate regression model.
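The study's comparison rests on the root mean squared and mean absolute prediction errors. As a reminder of these two metrics, here is a minimal sketch on invented toy data (the millimetre values below are illustrative, not the study's measurements):

```python
import math

def rmse_mae(y_true, y_pred):
    """Root mean squared error and mean absolute error of a prediction."""
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    mae = sum(abs(e) for e in errs) / len(errs)
    return rmse, mae

# Toy postoperative lens-position data (mm), purely illustrative.
measured = [4.52, 4.31, 4.78, 4.60]
predicted = [4.40, 4.35, 4.90, 4.55]
rmse, mae = rmse_mae(measured, predicted)
```

RMSE weights large errors more heavily than MAE, which is why the study reports both; a model with occasional large outliers shows a larger RMSE/MAE gap.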

