Quantum Go Machine

2020 ◽  
Author(s):  
Lu-Feng Qiao ◽  
Jun Gao ◽  
Zhi-Qiang Jiao ◽  
Zhe-Yong Zhang ◽  
Zhu Cao ◽  
...  

Abstract Go has long been considered a testbed for artificial intelligence. By introducing certain quantum features, such as superposition and the collapse of the wavefunction, we experimentally demonstrate a quantum version of Go using correlated photon pairs entangled in the polarization degree of freedom. The total dimension of the Hilbert space of the generated states grows exponentially as the two players take turns placing stones in a time series. As nondeterministic and imperfect-information games are more difficult to solve with current technology, we find, excitingly, that the inherent randomness of quantum physics brings the game a nondeterministic trait that does not exist in its classical counterpart. Quantum resources, such as coherence or entanglement, can also be encoded to represent the state of quantum stones. Adjusting the quantum resource may vary the average imperfect information of a single game (by comparison, classical Go is a perfect-information game). We further verify the nondeterministic feature by showing the unpredictability of the time series data obtained from different classes of quantum states. Finally, by comparing quantum Go with a few typical games that are widely studied in artificial intelligence, we find that quantum Go can cover a wide range of game difficulties rather than a single point. Our results establish a paradigm for inventing new games with quantum-enabled difficulties by harnessing inherent quantum features and resources, and provide a versatile platform for testing new algorithms in both classical and quantum machine learning.
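The superposition-and-collapse mechanic described in the abstract can be illustrated with a classical toy model, using a pseudorandom draw in place of a real photon measurement. The class, board positions, and probability parameter below are invented for illustration; this is not the authors' photonic implementation.

```python
import random

class QuantumStone:
    """Toy model of a quantum Go stone: it occupies two intersections in
    superposition until a measurement collapses it onto one of them."""

    def __init__(self, pos_a, pos_b, p_a=0.5):
        self.pos_a, self.pos_b = pos_a, pos_b
        self.p_a = p_a            # probability weight on pos_a
        self.collapsed = None     # set once the stone is measured

    def measure(self, rng=random):
        # Collapse: the stone lands on pos_a with probability p_a;
        # once collapsed, repeated measurements give the same outcome.
        if self.collapsed is None:
            self.collapsed = self.pos_a if rng.random() < self.p_a else self.pos_b
        return self.collapsed

stone = QuantumStone((3, 3), (16, 16), p_a=0.5)
first = stone.measure()
assert stone.measure() == first   # the collapse is irreversible
```

The irreversibility of `measure` is what makes the game nondeterministic for both players: neither can predict the collapse outcome before it happens, only its probability.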

2020 ◽  
Vol 109 (11) ◽  
pp. 2029-2061
Author(s):  
Zahraa S. Abdallah ◽  
Mohamed Medhat Gaber

Abstract Time series classification (TSC) is a challenging task that has attracted many researchers in recent years. One main challenge in TSC is the diversity of domains that time series data come from; thus, there is no “one model that fits all” in TSC. Some algorithms are very accurate at classifying a specific type of time series when the whole series is considered, while others only target the existence/non-existence of specific patterns/shapelets. Yet other techniques focus on the frequency of occurrence of discriminating patterns/features. This paper presents a new classification technique that addresses the inherent diversity problem in TSC using a nature-inspired method. The technique is inspired by how flies look at the world through “compound eyes” that are made up of thousands of lenses, called ommatidia. Each ommatidium is an eye with its own lens, and thousands of them together create a broad field of vision. The developed technique similarly uses different lenses and representations to look at the time series, and then combines them for broader visibility. These lenses are created through hyper-parameterisation of symbolic representations (Piecewise Aggregate and Fourier approximations). The algorithm builds a random forest for each lens, then performs soft dynamic voting to classify new instances using the most confident eyes, i.e., forests. We evaluate the new technique, coined Co-eye, using the recently released extended version of the UCR archive, containing more than 100 datasets across a wide range of domains. The results show the benefit of bringing together different perspectives, reflected in the accuracy and robustness of Co-eye in comparison to other state-of-the-art techniques.
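One of the symbolic "lenses" mentioned, Piecewise Aggregate Approximation (PAA), reduces a series to the mean of each of several equal-width segments. A minimal sketch follows; the segment count and input series are illustrative, not taken from the paper.

```python
def paa(series, n_segments):
    """Piecewise Aggregate Approximation: split the series into
    n_segments roughly equal windows and return each window's mean."""
    n = len(series)
    bounds = [round(i * n / n_segments) for i in range(n_segments + 1)]
    return [sum(series[bounds[i]:bounds[i + 1]]) / (bounds[i + 1] - bounds[i])
            for i in range(n_segments)]

print(paa([1, 1, 2, 2, 8, 8, 4, 4], 4))  # [1.0, 2.0, 8.0, 4.0]
```

Varying `n_segments` (and, for the Fourier lens, the number of coefficients) is exactly the hyper-parameterisation that gives each "eye" a different resolution on the same series.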


2007 ◽  
Vol 23 (4) ◽  
pp. 227-237 ◽  
Author(s):  
Thomas Kubiak ◽  
Cornelia Jonas

Abstract. Patterns of psychological variables over time have been of interest to research from the beginning. This is particularly true for ambulatory monitoring research, where large (cross-sectional) time-series datasets are often the matter of investigation. Common methods for identifying cyclic variations include spectral analyses of time-series data or time-domain based strategies, which also allow for modeling cyclic components. Though the prerequisites of these sophisticated procedures, such as interval-scaled time-series variables, are seldom met, their usage is common. In contrast to the time-series approach, methods from a different field of statistics, directional or circular statistics, offer another opportunity for the detection of patterns in time, where fewer prerequisites have to be met. These approaches are commonly used in biology or geostatistics. They offer a wide range of analytical strategies to examine “circular data,” i.e., data where the period of measurement is rotationally invariant (e.g., directions on the compass or daily hours ranging from 0 to 24, 24 being the same as 0). In psychology, however, circular statistics are hardly known at all. In the present paper, we give a succinct introduction to the rationale of circular statistics and describe how this approach can be used for the detection of patterns in time, contrasting it with time-series analysis. We report data from a monitoring study, in which mood and social interactions were assessed for 4 weeks, to illustrate the use of circular statistics. Both the results of periodogram analyses and circular statistics-based results are reported. Advantages and possible pitfalls of the circular statistics approach are highlighted, concluding that ambulatory assessment research can benefit from strategies borrowed from circular statistics.
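The core circular-statistics idea (hours 0–24 with 24 ≡ 0) can be sketched by mapping clock times to angles, averaging the unit vectors, and mapping the resultant direction back to hours. The example times are illustrative only.

```python
import math

def circular_mean_hours(hours, period=24.0):
    """Mean of clock times treated as points on a circle, so that
    23:00 and 01:00 average to midnight rather than to 12:00."""
    angles = [2 * math.pi * h / period for h in hours]
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    mean_angle = math.atan2(s, c)               # direction of the resultant vector
    return (mean_angle * period / (2 * math.pi)) % period

m = circular_mean_hours([23.0, 1.0])
print(m)  # ~0 (midnight, modulo 24); the linear mean would be 12.0
```

The length of the resultant vector, `math.hypot(s, c) / len(hours)`, additionally measures how concentrated the times are around that mean direction, which is the circular analogue of (inverse) variance.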


Author(s):  
Trung Duy Pham ◽  
Dat Tran ◽  
Wanli Ma

In the biomedical and healthcare fields, ownership protection of outsourced data is becoming a challenging issue when sharing data between data owners and data mining experts to extract hidden knowledge and patterns. Watermarking has been proven to be a rights-protection mechanism that provides detectable evidence of the legal ownership of a shared dataset without compromising its usability, across a wide range of data mining applications and digital data formats such as audio, video, image, relational database, text, and software. Biomedical time series data such as electroencephalography (EEG) or electrocardiography (ECG) are valuable and costly in healthcare, and need ownership protection when shared or transmitted in data mining applications. However, little previous research has investigated this issue for this kind of data, given its particular characteristics and requirements. This paper proposes an optimized watermarking scheme to protect ownership for biomedical and healthcare systems in data mining. To achieve the highest possible robustness without losing watermark transparency, a Particle Swarm Optimization (PSO) technique is used to find a suitable quantization step. Experimental results on EEG data show that the proposed scheme provides good imperceptibility and is more robust against various signal processing techniques and common attacks such as noise addition, low-pass filtering, and re-sampling.
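Quantization-based embedding of the kind this scheme optimizes can be illustrated with plain quantization index modulation (QIM): each sample is snapped to a lattice whose parity encodes one watermark bit. The step size Δ below is fixed for illustration, whereas the paper tunes it with PSO; the sample values are invented.

```python
def qim_embed(x, bit, delta):
    """Snap sample x to the nearest multiple of delta whose parity
    matches `bit`: even multiples encode 0, odd multiples encode 1."""
    q = round(x / delta)
    if q % 2 != bit:
        q += 1 if x / delta >= q else -1   # step to the nearest valid lattice point
    return q * delta

def qim_extract(y, delta):
    """Recover the embedded bit from the parity of the nearest lattice point."""
    return round(y / delta) % 2

delta = 0.5
samples = [1.23, -0.87, 4.01, 2.6]          # stand-in for EEG amplitudes
bits = [1, 0, 1, 0]
marked = [qim_embed(x, b, delta) for x, b in zip(samples, bits)]
assert [qim_extract(y, delta) for y in marked] == bits
```

The trade-off PSO navigates is visible here: a larger Δ survives more noise (extraction tolerates perturbations up to Δ/2), but moves samples further from their original values, hurting imperceptibility.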


2018 ◽  
Vol 74 (9) ◽  
pp. 1461-1467 ◽  
Author(s):  
David A Raichlen ◽  
Yann C Klimentidis ◽  
Chiu-Hsieh Hsu ◽  
Gene E Alexander

Abstract Background Accelerometers are included in a wide range of devices that monitor and track physical activity for health-related applications. However, the clinical utility of the information embedded in their rich time-series data has been greatly understudied and has yet to be fully realized. Here, we examine the potential for fractal complexity of actigraphy data to serve as a clinical biomarker for mortality risk. Methods We use detrended fluctuation analysis (DFA) to analyze actigraphy data from the National Health and Nutrition Examination Survey (NHANES; n = 11,694). The DFA method measures fractal complexity (signal self-affinity across time-scales) as correlations between the amplitude of signal fluctuations in time-series data across a range of time-scales. The slope, α, relating the fluctuation amplitudes to the time-scales over which they were measured, describes the complexity of the signal. Results Fractal complexity of physical activity (α) decreased significantly with age (p = 1.29E−6) and was lower in women compared with men (p = 1.79E−4). Higher levels of moderate-to-vigorous physical activity in older adults and in women were associated with greater fractal complexity. In adults aged 50–79 years, lower fractal complexity of activity (α) was associated with greater mortality (hazard ratio = 0.64; 95% confidence interval = 0.49–0.82) after adjusting for age, exercise engagement, chronic diseases, and other covariates associated with mortality. Conclusions Wearable accelerometers can provide a noninvasive biomarker of physiological aging and mortality risk after adjusting for other factors strongly associated with mortality. Thus, this fractal analysis of accelerometer signals provides a novel clinical application for wearable accelerometers, advancing efforts for remote monitoring of physiological health by clinicians.
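The DFA slope α described above can be sketched in a few lines: integrate the mean-centered signal, detrend it piecewise in windows of varying size, and fit log fluctuation against log window size. This is a bare-bones illustration on synthetic white noise (for which α should be near 0.5), not the NHANES pipeline.

```python
import math, random

def _linfit(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def dfa_alpha(signal, scales):
    """Detrended fluctuation analysis: slope of log F(s) vs. log s."""
    n = len(signal)
    mean = sum(signal) / n
    profile, acc = [], 0.0
    for x in signal:                      # integrated, mean-centered series
        acc += x - mean
        profile.append(acc)
    log_s, log_f = [], []
    for s in scales:
        resid, n_win = 0.0, n // s
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            xs = list(range(s))
            slope, icpt = _linfit(xs, seg)   # local linear detrending
            resid += sum((y - (slope * x + icpt)) ** 2
                         for x, y in zip(xs, seg))
        log_s.append(math.log(s))
        log_f.append(math.log(math.sqrt(resid / (n_win * s))))
    alpha, _ = _linfit(log_s, log_f)
    return alpha

random.seed(0)
white = [random.gauss(0, 1) for _ in range(4096)]
# White noise gives alpha near 0.5; more structured (persistent) signals
# drift toward 1 and above.
print(dfa_alpha(white, [8, 16, 32, 64, 128]))
```

Lower α in a subject's actigraphy corresponds to activity fluctuations closer to uncorrelated noise, which is the "reduced fractal complexity" the study links to mortality risk.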


2019 ◽  
Vol 34 (5) ◽  
pp. 551-561 ◽  
Author(s):  
Lakshman Abhilash ◽  
Vasu Sheeba

Research on circadian rhythms often requires researchers to estimate the period, robustness/power, and phase of a rhythm. These are important to estimate because they act as readouts of different features of the underlying clock. The commonly used tools to this end suffer from being expensive, having limited interactivity, being cumbersome to use, or a combination of these. As a step toward making such tools accessible to users who may not be able to afford them, and to ease the analysis of biological time-series data, we have written RhythmicAlly, an open-source program using R and Shiny that has the following advantages: (1) it is free; (2) it allows subjective marking of phases on actograms; (3) it provides high interactivity with graphs; (4) it allows visualization and storage of data for a batch of individuals simultaneously; and (5) it does what other free programs do but with fewer mouse clicks, making it more efficient and user-friendly. Moreover, our program can be used for a wide range of ultradian, circadian, and infradian rhythms from a variety of organisms, some examples of which are described here. The first version of RhythmicAlly is available on GitHub, and we aim to maintain the program with subsequent versions offering updated methods for visualizing and analyzing time-series data.


2017 ◽  
Vol 139 (6) ◽  
Author(s):  
Afshin Abbasi Hoseini ◽  
Sverre Steen

A framework is presented for data mining in multivariate time series collected over hours of ship operation, to extract vessel states from the data. The measurements made by a ship monitoring system lead to a collection of time-organized in-service data. Usually, these time series datasets are big, complicated, and high-dimensional. The purpose of time-series data mining is to bridge the gap between a massive database and the meaningful information hidden behind the data. An important aspect of the proposed framework is selecting relevant variables, eliminating unnecessary information or noise, and extracting the essential features of the problem so that the vessel behavior can be identified reliably. Principal component analysis (PCA) is employed to address multicollinearity in the data and for dimensionality reduction. The data mining approach itself is based on unsupervised data clustering using self-organizing maps (SOM) and k-means, with k-nearest neighbors search (k-NNS) used for searching and recovering specific information from the database. As a case study, the results are based on onboard monitoring data of the Norwegian University of Science and Technology (NTNU) research vessel “Gunnerus.” The scope of this work is limited to detecting ship maneuvers; however, it is extendable to a wide range of smart marine applications. As illustrated in the results, this approach is effective in identifying previously unknown states of the ship with acceptable accuracy.
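The clustering stage of such a pipeline can be sketched with plain k-means on toy per-window feature vectors, e.g. mean speed and mean rate of turn. The features, data, and deterministic initialization below are invented for illustration; the PCA and SOM steps are omitted.

```python
import math, random

def kmeans(points, k, iters=50):
    """Lloyd's k-means with a simple deterministic init: assign each
    point to its nearest centroid, then move centroids to group means."""
    cents = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, cents[i]))
            groups[j].append(p)
        # Empty groups keep their previous centroid.
        cents = [tuple(sum(c) / len(g) for c in zip(*g)) if g else cents[i]
                 for i, g in enumerate(groups)]
    return cents

rng = random.Random(42)
# Toy windows: (mean speed in knots, mean rate of turn in deg/s).
# Transit windows sit at high speed / low turn; maneuvers the opposite.
transit = [(10 + rng.random(), 0.1 + 0.05 * rng.random()) for _ in range(20)]
maneuver = [(2 + rng.random(), 3.0 + 0.5 * rng.random()) for _ in range(20)]
cents = kmeans(transit + maneuver, 2)
print(sorted(c[0] for c in cents))  # one centroid near speed ~2.5, one near ~10.5
```

In a real pipeline the features would first be standardized and projected through PCA, since raw speed and turn rate live on very different scales and Euclidean distance would otherwise be dominated by one of them.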


2020 ◽  
Author(s):  
Yu-wen Chen ◽  
Yu-jie Li ◽  
Zhi-yong Yang ◽  
Kun-hua Zhong ◽  
Li-ge Zhang ◽  
...  

Abstract Background Dynamic prediction of patients’ mortality risk in the ICU with time series data is limited by high dimensionality, uncertainty in sampling intervals, and other issues. A new deep learning method, the temporal convolutional network (TCN), makes it possible to deal with complex clinical time series data in the ICU. We aimed to develop and validate such a model to predict mortality risk using time series data from the MIMIC-III dataset. Methods In total, 21,139 records of ICU stays were analyzed, and 17 physiological variables from the MIMIC-III dataset were used to predict mortality risk. We then compared the performance of the attention-based TCN with that of traditional artificial intelligence (AI) methods. Results The area under the receiver operating characteristic curve (AUC-ROC) and area under the precision-recall curve (AUC-PR) of the attention-based TCN for predicting mortality risk 48 h after ICU admission were 0.837 (0.824–0.850) and 0.454, respectively. The sensitivity and specificity of the attention-based TCN were 67.1% and 82.6%, whereas the traditional AI methods yielded low sensitivity (< 50%). Conclusions The attention-based TCN model achieved better performance in predicting mortality risk with time series data than traditional AI methods and conventional score-based models. The attention-based TCN mortality risk model has the potential to help decision-making for critical patients.
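The core TCN building block is a causal dilated convolution: the output at time t depends only on inputs at t, t−d, t−2d, …, so no future information leaks into the prediction. A minimal sketch follows; the kernel and input are illustrative, and the attention and residual layers of the actual model are omitted.

```python
def causal_dilated_conv(x, kernel, dilation):
    """1-D causal convolution: output[t] combines x[t], x[t-d], x[t-2d], ...
    with zero padding on the left, so no future leakage occurs."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - i * dilation          # taps reach backward in time only
            if j >= 0:
                acc += w * x[j]
        out.append(acc)
    return out

x = [1, 2, 3, 4, 5, 6]
print(causal_dilated_conv(x, [1, -1], dilation=2))
# [1.0, 2.0, 2.0, 2.0, 2.0, 2.0]: each output is x[t] - x[t-2], zero-padded
```

Stacking such layers with exponentially growing dilations (1, 2, 4, …) is what lets a TCN cover long ICU stays with few layers, which is its advantage over recurrent models on irregular clinical series.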


2019 ◽  
Vol 10 (3) ◽  
pp. 27-33
Author(s):  
Ravindra Sadashivrao Apare ◽  
Satish Narayanrao Gujar

IoT (Internet of Things) is a sophisticated analytics and automation paradigm that utilizes networking, big data, artificial intelligence, and sensing technology to deliver complete systems for a service or product. The major challenges in IoT lie in the security restrictions associated with producing low-cost devices, and in the growing number of devices, which creates further opportunities for attack. Hence, this article develops a promising methodology for data privacy preservation in IoT networks. Notably, IoT devices often generate time series data, and the range of a given time series can be extremely large.


Algorithms ◽  
2020 ◽  
Vol 13 (11) ◽  
pp. 284
Author(s):  
Zhenwen He ◽  
Shirong Long ◽  
Xiaogang Ma ◽  
Hong Zhao

A large amount of time series data is generated every day in a wide range of sensor application domains. Symbolic aggregate approximation (SAX) is a well-known time series representation method, which provides a lower bound on the Euclidean distance and can discretize continuous time series. SAX has been widely used in various domains, such as mobile data management, financial investment, and shape discovery. However, the SAX representation has a limitation: symbols are mapped from the average values of segments, but SAX does not consider the boundary distance within segments. Different segments with similar average values may be mapped to the same symbols, and the SAX distance between them is then 0. In this paper, we propose a novel representation named SAX-BD (boundary distance), which integrates the SAX distance with a weighted boundary distance. The experimental results show that SAX-BD significantly outperforms the SAX, ESAX, and SAX-TD representations.

