Targeting targeted memory reactivation: characteristics of cued reactivation in sleep

2021 ◽  
Author(s):  
Mahmoud E. A. Abdellahi ◽  
Anne C. M. Koopman ◽  
Matthias S. Treder ◽  
Penelope A. Lewis

Targeted memory reactivation (TMR) is a technique in which sensory cues associated with memories during wake are used to trigger memory reactivation during subsequent sleep. The characteristics of such cued reactivation, and the optimal placement of TMR cues, remain to be determined. We built an EEG classification pipeline that discriminated reactivation of right- and left-handed movements and found that cues which fall on the up-going transition of the slow oscillation (SO) are more likely to elicit a classifiable reactivation. We also used a novel machine learning pipeline to predict the likelihood of eliciting a classifiable reactivation after each TMR cue using the presence of spindles and features of SOs. Finally, we found that reactivations occurred either immediately after the cue or one second later. These findings greatly extend our understanding of memory reactivation and pave the way for development of wearable technologies to efficiently enhance memory through cueing in sleep.
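
As a rough illustration of the kind of pipeline described above, the sketch below decodes left- vs right-hand cue labels from post-cue EEG epochs using band-power features and a linear classifier. The data, feature band, and classifier here are illustrative assumptions; the authors' actual features, models, and cross-validation scheme are not specified in this abstract and may differ.

```python
# Minimal sketch of an EEG classification pipeline for cued memory reactivation,
# using synthetic data; features and classifier are assumptions for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical post-cue EEG epochs: (trials, channels, samples) at 100 Hz for 1 s.
n_trials, n_channels, n_samples = 200, 32, 100
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, n_trials)  # 0 = left-hand cue, 1 = right-hand cue

def bandpower_features(x):
    """Crude per-channel spectral power in a sigma-like band (~10-16 Hz bins)."""
    spectrum = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    return spectrum[..., 10:17].mean(axis=-1)  # shape: (trials, channels)

features = bandpower_features(epochs)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, features, labels, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```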

Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 18
Author(s):  
Pantelis Linardatos ◽  
Vasilis Papastefanopoulos ◽  
Sotiris Kotsiantis

Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into "black box" approaches and creating uncertainty about how they operate and, ultimately, how they reach their decisions. This opacity has made it difficult to adopt machine learning systems in sensitive yet critical domains, such as healthcare, where their value could be immense. As a result, scientific interest in Explainable Artificial Intelligence (XAI), the field concerned with developing new methods that explain and interpret machine learning models, has been strongly reignited in recent years. This study focuses on machine learning interpretability methods: it presents a literature review and taxonomy of these methods, together with links to their programming implementations, in the hope that the survey will serve as a reference point for both theorists and practitioners.
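
For a concrete anchor, the sketch below runs one of the model-agnostic interpretability methods typically catalogued in such surveys, permutation importance, on an off-the-shelf dataset. The dataset and model are assumptions chosen for illustration, not taken from the paper.

```python
# Minimal sketch of a model-agnostic interpretability method (permutation
# importance); data and model here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy it causes.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```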


2021 ◽  
Vol 12 (1) ◽  
pp. 101-112
Author(s):  
Kishore Sugali ◽  
Chris Sprunger ◽  
Venkata N Inukollu

The history of Artificial Intelligence and Machine Learning dates back to the 1950s. In recent years, applications that implement AI and ML technology have grown increasingly popular. As with traditional development, software testing is a critical component of an effective AI/ML application. However, the development methodology used in AI/ML differs significantly from traditional development, and these differences give rise to numerous software testing challenges. This paper aims to identify and explain some of the biggest challenges that software testers face when dealing with AI/ML applications. The study has key implications for future research: each of the challenges outlined in this paper is ideal for further investigation and has great potential to shed light on more productive software testing strategies and methodologies for AI/ML applications.
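
One recurring challenge in testing ML applications is the lack of a definitive test oracle; metamorphic testing is one commonly cited workaround, sketched below under assumed data and model choices. The paper itself does not prescribe this specific technique.

```python
# Minimal sketch of a metamorphic test for an ML model, assuming a scikit-learn
# classifier; the metamorphic relation and thresholds are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def test_predictions_stable_under_tiny_noise():
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    baseline = model.predict(X)

    # Metamorphic relation: noise far below the feature scale should not
    # flip (almost any) predictions, even without knowing the "true" output.
    X_perturbed = X + np.random.default_rng(0).normal(0, 1e-6, X.shape)
    perturbed = model.predict(X_perturbed)

    agreement = (baseline == perturbed).mean()
    assert agreement > 0.99, f"Predictions unstable under tiny perturbation: {agreement:.3f}"

test_predictions_stable_under_tiny_noise()
```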


2018 ◽  
Vol 26 (5) ◽  
pp. 1755-1758 ◽  
Author(s):  
Sirish Shrestha ◽  
Partho P. Sengupta

2021 ◽  
Vol 9 (2) ◽  
pp. 1-19
Author(s):  
Lawrence A. Gordon

The objective of this paper is to assess the impact of data analytics (DA) and machine learning (ML) on accounting research.[1] As discussed in the paper, the inherently inductive nature of DA and ML is creating an important trend in the way accounting research is conducted: the increasing use of inductive-based research among accounting researchers. Indeed, as a result of recent developments in DA and ML, a rebalancing is taking place between inductive-based and deductive-based research in accounting.[2] In essence, we are witnessing the resurrection of inductive-based accounting research. The paper also provides a brief review of empirical evidence supporting this argument.


2021 ◽  
Author(s):  
Tareq Aziz AL-Qutami ◽  
Fatin Awina Awis

Abstract Real-time location information is essential in hazardous process and construction areas for safety and emergency management, security, search and rescue, and even productivity tracking. It is also crucial for contact tracing during pandemics such as COVID-19, to isolate those who came into proximity of infected individuals. While global positioning systems (GPS) can address the demand for location awareness outdoors, an accurate location estimation technology is required for indoor environments, where GPS does not perform well. This paper presents the development and deployment of an end-to-end, cost-effective real-time personnel location system suitable for both indoor and outdoor, hazardous and safe areas. It leverages facility wireless communication systems, wearable technologies such as smart helmets and wearable tags, and machine learning. Each person carries a client device that collects location-related information and sends it to the localization algorithm in the cloud; as the person moves, a tracking dashboard shows the client's location in real time. The proposed localization algorithm relies on wireless signal fingerprinting and machine learning to estimate location. The machine learning algorithm combines clustering and classification, was designed to scale well to larger target areas, and is suitable for cloud deployment. The system was tested in both office and industrial process environments using consumer-grade handphones and intrinsically safe wearable devices, achieving an average distance error of less than 2 meters in 3D space.
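
A minimal sketch of the clustering-plus-classification fingerprinting idea is shown below, assuming a synthetic RSSI radio map, zone labels, and specific model choices (k-means and k-nearest neighbours) that stand in for the unspecified algorithms used by the authors.

```python
# Minimal sketch of Wi-Fi RSSI fingerprinting localization combining clustering
# and classification; the radio map, features, and models are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Hypothetical offline radio map: RSSI (dBm) from 8 access points at 300 survey
# points, each labelled with a location zone identifier.
n_points, n_aps, n_zones = 300, 8, 10
fingerprints = rng.uniform(-90, -30, size=(n_points, n_aps))
zones = rng.integers(0, n_zones, size=n_points)

# Step 1: cluster the radio map so each query is matched only against a
# subset of fingerprints (helps the system scale to larger areas).
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(fingerprints)

# Step 2: train one zone classifier per cluster.
classifiers = {}
for c in range(kmeans.n_clusters):
    mask = kmeans.labels_ == c
    classifiers[c] = KNeighborsClassifier(n_neighbors=3).fit(fingerprints[mask], zones[mask])

def locate(rssi_sample):
    """Predict the zone of a live RSSI measurement."""
    cluster = kmeans.predict(rssi_sample.reshape(1, -1))[0]
    return classifiers[cluster].predict(rssi_sample.reshape(1, -1))[0]

print("Predicted zone:", locate(rng.uniform(-90, -30, size=n_aps)))
```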


2018 ◽  
pp. 489-516
Author(s):  
Alessio Drivet

Wearable technologies represent an emerging theme and are probably the next market on which companies will focus. These devices are replacing entire categories of electronic objects in everyday life and affect the way we live, work, and socialize. Among the many applications available, the author limits attention to smart video cameras. This chapter examines some of the most interesting applications of wearable cameras, with special reference to the Italian situation. In particular, the text summarizes the main applications in sports, spying, policing, the army, education, health, disabilities, and lifelogging. A part is devoted to "wearable extensions" and the concept of augmented reality.


2021 ◽  
pp. 311-322
Author(s):  
Juan Carlos Gómez-López ◽  
Juan José Escobar ◽  
Jesús González ◽  
Francisco Gil-Montoya ◽  
Julio Ortega ◽  
...  

2019 ◽  
Vol 24 (12) ◽  
pp. 9243-9256
Author(s):  
Jordan J. Bird ◽  
Anikó Ekárt ◽  
Diego R. Faria

Abstract In this work, we argue that pseudorandom and quantum-random number generators (PRNG and QRNG) affect the performance and behaviour of various machine learning models that require random input in ways that are not yet understood. These implications had not been explored in soft computing prior to this work. We use a CPU and a QPU to generate random numbers for multiple machine learning techniques. Random numbers are employed in the random initial weight distributions of dense and convolutional neural networks, where the results show a profound difference in learning patterns between the two. In 50 dense neural networks (25 PRNG/25 QRNG), QRNG improves over PRNG for accent classification by +0.1%, and QRNG exceeds PRNG for mental-state EEG classification by +2.82%. In 50 convolutional neural networks (25 PRNG/25 QRNG), the MNIST and CIFAR-10 problems are benchmarked; on MNIST the QRNG starts at a higher accuracy than the PRNG but ultimately exceeds it by only 0.02%, while on CIFAR-10 the QRNG outperforms the PRNG by +0.92%. The n-random split of a Random Tree is extended towards a new Quantum Random Tree (QRT) model, which has differing classification abilities from its classical counterpart; 200 trees are trained and compared (100 PRNG/100 QRNG). Using the accent classification data set, a QRT seemed inferior to an RT, performing worse on average by −0.12%. This pattern is also seen in the EEG classification problem, where a QRT performs worse than an RT by −0.28%. Finally, the QRT is ensembled into a Quantum Random Forest (QRF), which also behaves noticeably differently from the standard Random Forest (RF). Ensembles of ten to 100 trees are benchmarked on the accent and EEG classification problems. In accent classification, the best RF (100 RT) outperforms the best QRF (100 QRT) by 0.14% accuracy. In EEG classification, the best RF (100 RT) outperforms the best QRF (100 QRT) by 0.08% but is far more complex, requiring twice the number of trees in the committee. All differences are observed to be situationally positive or negative and are thus likely data dependent in their observed behaviour.
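
The sketch below shows the mechanical part of such an experiment: drawing a dense layer's initial weights from two different random sources. The quantum source is stubbed with operating-system entropy, since the study itself used an actual QPU; everything here is an illustrative assumption.

```python
# Minimal sketch of drawing a neural network's initial weights from different
# random sources, as in the PRNG-vs-QRNG comparison above; the quantum source
# is only a stub based on os.urandom.
import numpy as np

def prng_uniform(shape, seed=0):
    """Pseudorandom weights from NumPy's default bit generator."""
    return np.random.default_rng(seed).uniform(-0.05, 0.05, size=shape)

def qrng_uniform(shape):
    """Placeholder for a quantum random source (e.g. bits fetched from a QPU
    or hardware QRNG); here simulated with operating-system entropy."""
    import os
    raw = np.frombuffer(os.urandom(int(np.prod(shape)) * 8), dtype=np.uint64)
    return (raw / np.iinfo(np.uint64).max * 0.1 - 0.05).reshape(shape)

# Dense layer of 784 inputs -> 128 units, initialised from each source.
w_prng = prng_uniform((784, 128))
w_qrng = qrng_uniform((784, 128))

# Both distributions look statistically similar; the paper's point is that
# downstream training dynamics can still differ between sources.
print("PRNG mean/std:     ", w_prng.mean().round(4), w_prng.std().round(4))
print("QRNG-stub mean/std:", w_qrng.mean().round(4), w_qrng.std().round(4))
```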


Author(s):  
Peter Flach

This paper gives an overview of some ways in which our understanding of performance evaluation measures for machine-learned classifiers has improved over the last twenty years. I also highlight a range of areas where this understanding is still lacking, leading to ill-advised practices in classifier evaluation. This suggests that in order to make further progress we need to develop a proper measurement theory of machine learning. I then demonstrate by example what such a measurement theory might look like and what kinds of new results it would entail. Finally, I argue that key properties such as classification ability and data set difficulty are unlikely to be directly observable, suggesting the need for latent-variable models and causal inference.
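
As a small illustration of why the choice of evaluation measure matters, the sketch below scores a trivial majority-class predictor on an imbalanced test set under two measures; the data and measures are assumptions chosen for illustration, not drawn from the paper.

```python
# Minimal sketch: on an imbalanced test set a trivial majority-class classifier
# looks strong under accuracy but not under balanced accuracy.
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

rng = np.random.default_rng(0)
y_true = rng.choice([0, 1], size=1000, p=[0.95, 0.05])  # 5% positives
y_pred = np.zeros_like(y_true)                          # always predict the majority class

print("Accuracy:         ", accuracy_score(y_true, y_pred))           # ~0.95
print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))  # 0.5
```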

