Decoding Complex Cognitive States Online by Manifold Regularization in Real-Time fMRI

Author(s):  
Toke Jansen Hansen ◽  
Lars Kai Hansen ◽  
Kristoffer Hougaard Madsen


Author(s):  
Peter D. MacIntyre ◽  
Tammy Gregersen

The idiodynamic method is a relatively new mixed-method approach to studying in real time the complex dynamics of integrated affective and cognitive states that interact continuously with human communication. The method requires video recording a sample of communication from a research participant and then using specialized software to play the video back while collecting contemporaneous self-reported ratings (approximately one per second) on one or more focal variables of interest to the researcher, such as willingness to communicate (WTC) or communication anxiety (CA). After the participant rates the communication sample, a continuous graph of changes in the focal variable is printed. The final step is to interview the speaker to gather an explanation for changes in the ratings, for example at peaks or valleys in the graph. The method can also collect observer ratings that can then be compared with the speaker’s self-ratings. To date, studies have been conducted examining WTC, CA, motivation, perceived competence, teacher self-efficacy, teacher empathy, and strategy use, among other topics. The strengths and limitations of the method will be discussed and a specific example of its use in measuring WTC and CA will be provided.


Robotics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 12
Author(s):  
Yixiang Lim ◽  
Nichakorn Pongsarkornsathien ◽  
Alessandro Gardi ◽  
Roberto Sabatini ◽  
Trevor Kistan ◽  
...  

Advances in unmanned aircraft systems (UAS) have paved the way for progressively higher levels of intelligence and autonomy, supporting new modes of operation, such as the one-to-many (OTM) concept, where a single human operator is responsible for monitoring and coordinating the tasks of multiple unmanned aerial vehicles (UAVs). This paper presents the development and evaluation of cognitive human-machine interfaces and interactions (CHMI2) supporting adaptive automation in OTM applications. A CHMI2 system comprises a network of neurophysiological sensors and machine-learning based models for inferring user cognitive states, as well as the adaptation engine containing a set of transition logics for control/display functions and discrete autonomy levels. Models of the user’s cognitive states are trained on past performance and neurophysiological data during an offline calibration phase, and subsequently used in the online adaptation phase for real-time inference of these cognitive states. To investigate adaptive automation in OTM applications, a scenario involving bushfire detection was developed where a single human operator is responsible for tasking multiple UAV platforms to search for and localize bushfires over a wide area. We present the architecture and design of the UAS simulation environment that was developed, together with various human-machine interface (HMI) formats and functions, to evaluate the CHMI2 system’s feasibility through human-in-the-loop (HITL) experiments. The CHMI2 module was subsequently integrated into the simulation environment, providing the sensing, inference, and adaptation capabilities needed to realise adaptive automation. HITL experiments were performed to verify the CHMI2 module’s functionalities in the offline calibration and online adaptation phases. In particular, results from the online adaptation phase showed that the system was able to support real-time inference and human-machine interface and interaction (HMI2) adaptation. 
However, the accuracy of the inferred workload was variable across the different participants (with a root mean squared error (RMSE) ranging from 0.2 to 0.6), partly due to the reduced number of neurophysiological features available as real-time inputs and also due to limited training stages in the offline calibration phase. To improve the performance of the system, future work will investigate the use of alternative machine learning techniques, additional neurophysiological input features, and a more extensive training stage.
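The RMSE figure quoted above is simply the root mean squared difference between the inferred and the reference workload traces. A minimal sketch, using hypothetical normalized workload values rather than the study's actual data:

```python
import math

def rmse(inferred, reference):
    """Root mean squared error between two equal-length workload traces."""
    n = len(inferred)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(inferred, reference)) / n)

# Hypothetical normalized workload samples (0 = low, 1 = high) for one participant.
inferred = [0.3, 0.5, 0.7, 0.6, 0.4]
reference = [0.2, 0.6, 0.9, 0.5, 0.3]
error = rmse(inferred, reference)  # ~0.13 for these illustrative traces
```

On a 0-1 workload scale, an RMSE of 0.6 means the inference is off by more than half the scale on average, which is why the authors point to additional input features and a longer calibration stage.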


Author(s):  
Martha E. Crosby ◽  
Curtis S. Ikehara

This chapter describes our research on deriving changing cognitive-state information from patterns in the data acquired from the user, with the goal of using this information to improve the presentation of multimedia computer information. Detecting individual differences via performance and psychometric tools can be supplemented with real-time physiological sensors. An example computer task demonstrating how cognitive load can be manipulated is described. The different types of physiological and cognitive-state measures are discussed along with their advantages and disadvantages. Experimental results from eye tracking and from the pressures applied to a computer mouse are described in greater detail. Finally, adaptive information filtering is discussed as a model for using physiological information to improve computer performance. Study results support the claim that we can create effective ways to adapt to a person’s cognition in real time and thus facilitate real-world tasks.


Computers ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 81
Author(s):  
Lars J. Planke ◽  
Alessandro Gardi ◽  
Roberto Sabatini ◽  
Trevor Kistan ◽  
Neta Ezer

With increasingly higher levels of automation in aerospace decision support systems, it is imperative that the human operator maintain a high level of situational awareness in different operational conditions and retain a central role in the decision-making process. While current aerospace systems and interfaces are limited in their adaptability, a Cognitive Human Machine System (CHMS) aims to perform dynamic, real-time system adaptation by estimating the cognitive states of the human operator. Nevertheless, to reliably drive system adaptation of current and emerging aerospace systems, cognitive states, particularly Mental Workload (MWL), must be estimated accurately and repeatably in real time. In this study, two sessions were performed during a Multi-Attribute Task Battery (MATB) scenario: a session for offline calibration and validation, and a session for online validation of eleven multimodal inference models of MWL. The multimodal inference models used an Adaptive Neuro Fuzzy Inference System (ANFIS) in different configurations to fuse an Electroencephalogram (EEG) model’s output, four eye activity features, and a control input feature. In the online validation, five of the ANFIS models (containing different combinations of eye activity and control input features) performed well; the best performing model (containing all four eye activity features and the control input feature) achieved an average Mean Absolute Error (MAE) of 0.67 ± 0.18 and a Correlation Coefficient (CC) of 0.71 ± 0.15. The remaining six ANFIS models included the EEG model’s output, which had an offset discrepancy that carried through to the online multimodal fusion. Nonetheless, the efficacy of these ANFIS models could be seen in their pairwise correlation with the task level, where one model achieved a CC of 0.77 ± 0.06, the highest among all the ANFIS models tested. Hence, this study demonstrates that online multimodal fusion of features extracted from EEG signals, eye activity, and control inputs can produce an accurate and repeatable inference of MWL.
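The two evaluation metrics quoted above, MAE against the reference workload and Pearson correlation with the task level, can be sketched as follows; the traces below are hypothetical, not the study's data:

```python
def mae(x, y):
    """Mean absolute error between two equal-length traces."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical inferred MWL trace vs. a stepped task-level profile.
inferred = [1.2, 1.4, 2.1, 2.8, 2.6, 1.5]
task_level = [1, 1, 2, 3, 3, 1]
```

A high CC with a low MAE indicates the inference both tracks workload changes and sits close to the reference in absolute terms; a high CC with a constant offset (as with the EEG-fed models above) inflates MAE while leaving CC largely intact.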


Author(s):  
Feng Zhou ◽  
Jianxin (Roger) Jiao

Vehicles with better usability have become increasingly popular due to their ease of operation and driving safety. However, the way the usability of in-vehicle system user interfaces is studied still needs improvement. This paper concerns how to use advanced computational, neurophysiology- and psychology-based tools and methodologies to determine the affective (emotional) states and behavioral data of an individual in real time, and in turn how to adapt the human-vehicle interaction to meet the user’s cognitive needs based on this real-time assessment. Specifically, we set up a set of neurophysiological equipment capable of collecting EEG, facial EMG (electromyography), skin conductance response, and respiration data, together with motion-sensing and tracking equipment capable of tracking eye movements and the objects with which the user interacts. All hardware and software components are integrated into a cohesive augmented sensor platform that performs as “one coherent system” to enable multimodal data processing and information inference for context-aware analysis of affective and cognitive states, based on a rough set inference engine. Meanwhile, subjective data are also recorded for comparison. A usability study of an in-vehicle system UI demonstrates the potential of the proposed methodology.


1979 ◽  
Vol 44 ◽  
pp. 41-47
Author(s):  
Donald A. Landman

This paper describes some recent results of our quiescent prominence spectrometry program at the Mees Solar Observatory on Haleakala. The observations were made with the 25 cm coronagraph/coudé spectrograph system using a silicon vidicon detector. This detector consists of 500 contiguous channels covering approximately 6 or 80 Å, depending on the grating used. The instrument is interfaced to the Observatory’s PDP 11/45 computer system, and has the important advantages of wide spectral response, linearity and signal-averaging with real-time display. Its principal drawback is the relatively small target size. For the present work, the aperture was about 3″ × 5″. Absolute intensity calibrations were made by measuring quiet regions near sun center.


Author(s):  
Alan S. Rudolph ◽  
Ronald R. Price

We have employed cryoelectron microscopy to visualize events that occur during the freeze-drying of artificial membranes, using real-time video capture techniques. Artificial membranes, or liposomes, are spherical structures with an internal aqueous space; they are stabilized by water, which provides the driving force for their spontaneous self-assembly. Previous assays of the damage induced in these structures by freeze-drying reveal that the two principal deleterious events are 1) fusion of liposomes and 2) leakage of contents trapped within the liposome [1]. In the past, the only way to assess these events was to examine the liposomes following the dehydration event. This technique allows the events to be monitored in real time as the liposomes destabilize and as water is sublimed at cryo temperatures in the vacuum of the microscope. The mechanisms by which liposomes are compromised by freeze-drying are largely unknown. This technique has shown that cryoprotectants such as glycerol and carbohydrates are able to maintain liposomal structure throughout the drying process.


Author(s):  
R.P. Goehner ◽  
W.T. Hatfield ◽  
Prakash Rao

Computer programs are now available in various laboratories for the indexing and simulation of transmission electron diffraction patterns. Although these programs address various aspects of the indexing and simulation process, the ultimate goal is to perform real-time diffraction pattern analysis directly off the imaging screen of the transmission electron microscope. The program described in this paper represents one step prior to real-time analysis. It combines two programs, described in an earlier paper (1), into a single program for use on an interactive basis with a minicomputer. In our case, the minicomputer is an INTERDATA 70 equipped with a Tektronix 4010-1 graphical display terminal and hard copy unit. A simplified flow diagram of the combined program, written in Fortran IV, is shown in Figure 1. It consists of two programs, INDEX and TEDP, which index and simulate electron diffraction patterns, respectively. The user has the option of choosing either the indexing or the simulating aspects of the combined program.
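The indexing half of such a program rests on the camera equation r·d = λL: a measured spot (or ring) radius r on the plate maps to an interplanar spacing d, which is then matched against candidate (hkl) spacings. A minimal sketch for a cubic crystal, with an assumed camera constant and lattice parameter (both illustrative values, not taken from the paper):

```python
import math

CAMERA_CONSTANT = 41.0  # lambda * L in mm * Angstrom (assumed for illustration)
LATTICE_A = 3.615       # cubic lattice parameter in Angstrom (e.g. fcc copper)

def d_spacing(radius_mm):
    """Camera equation r * d = lambda * L: spacing from a measured spot radius."""
    return CAMERA_CONSTANT / radius_mm

def index_cubic(radius_mm, max_index=5, tol=0.02):
    """Return an (h, k, l) whose cubic d-spacing matches the measurement within tol."""
    d_meas = d_spacing(radius_mm)
    for h in range(max_index + 1):
        for k in range(h + 1):
            for l in range(k + 1):
                if h == k == l == 0:
                    continue
                # d_hkl = a / sqrt(h^2 + k^2 + l^2) for a cubic lattice
                d_hkl = LATTICE_A / math.sqrt(h * h + k * k + l * l)
                if abs(d_hkl - d_meas) / d_hkl < tol:
                    return (h, k, l)
    return None
```

The programs cited handle arbitrary crystal systems, inter-spot angles, and zone-axis determination; this sketch only illustrates the radius-to-spacing matching step that both indexing and simulation build on.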


Author(s):  
R. Rajesh ◽  
R. Droopad ◽  
C. H. Kuo ◽  
R. W. Carpenter ◽  
G. N. Maracas

Knowledge of material pseudodielectric functions at MBE growth temperatures is essential for achieving in-situ, real-time growth control. This allows us to accurately monitor and control the thicknesses of the layers during growth. Undesired effusion cell temperature fluctuations during growth can thus be compensated for in real time by spectroscopic ellipsometry. The accuracy in determining pseudodielectric functions is increased if one does not need to apply a structure model to correct for the presence of an unknown surface layer such as a native oxide. Performing these measurements in an MBE reactor on as-grown material gives us this advantage. Thus, a simple three-phase model (vacuum/thin film/substrate) can be used to obtain thin-film data without uncertainties arising from a surface oxide layer of unknown composition and temperature dependence. In this study, we obtain the pseudodielectric functions of MBE-grown AlAs from growth temperature (650°C) to room temperature (30°C). The profile of the wavelength-dependent function from the ellipsometry data indicated a rough surface after growth of 0.5 μm of AlAs at a substrate temperature of 600°C, which is typical for MBE growth of GaAs.
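The three-phase (vacuum/thin film/substrate) model referenced above combines the Fresnel coefficients of the two interfaces with the film's phase thickness; the resulting ellipsometric ratio ρ = r_p/r_s is then converted to a pseudodielectric function by the standard two-phase inversion. A sketch under assumed inputs (the dielectric values below are illustrative, not the measured AlAs data):

```python
import cmath
import math

def pseudo_eps(eps_film, eps_sub, d_nm, wavelength_nm, phi_deg):
    """Pseudodielectric function <eps> of a vacuum/film/substrate stack.

    phi_deg is the angle of incidence; the ambient is vacuum (eps = 1).
    """
    phi = math.radians(phi_deg)
    s2 = math.sin(phi) ** 2
    eps = (1.0, eps_film, eps_sub)
    q = [cmath.sqrt(e - s2) for e in eps]  # N_j cos(theta_j) via Snell's law

    def r_s(i, j):
        return (q[i] - q[j]) / (q[i] + q[j])

    def r_p(i, j):
        return (eps[j] * q[i] - eps[i] * q[j]) / (eps[j] * q[i] + eps[i] * q[j])

    # Phase factor accumulated in one double pass through the film.
    x = cmath.exp(-2j * (2 * math.pi * d_nm / wavelength_nm) * q[1])
    rp = (r_p(0, 1) + r_p(1, 2) * x) / (1 + r_p(0, 1) * r_p(1, 2) * x)
    rs = (r_s(0, 1) + r_s(1, 2) * x) / (1 + r_s(0, 1) * r_s(1, 2) * x)
    rho = rp / rs
    # Two-phase inversion: <eps> = sin^2(phi) * (1 + tan^2(phi) * ((1-rho)/(1+rho))^2)
    return s2 * (1 + math.tan(phi) ** 2 * ((1 - rho) / (1 + rho)) ** 2)
```

With zero film thickness the expression collapses to the two-phase model and <eps> reproduces the substrate dielectric function exactly, which is a convenient self-check; with a finite film, the departure of <eps> from the substrate value carries the thickness information used for growth control.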

