Time-dependent emotional memory transformation: divergent pathways of item memory and contextual dependency

2021
Author(s):  
Wouter Cox ◽  
Martijn Meeter ◽  
Merel Kindt ◽  
Vanessa van Ast

Emotional memory can persist strikingly long, but it is believed that not all its elements are protected against the fading effects of time. So far, studies of emotional episodic memory have mostly investigated retention up to 24 h post-encoding, and revealed that central emotional features (items) are usually strengthened, while contextual binding of the event is reduced. However, even though it is known for neutral memories that central versus contextual elements evolve differently with the longer passage of time, the time-dependent evolution of emotional memories remains unclear. We hypothesized that, compared to neutral memories, emotional item memory becomes increasingly stronger with time, accompanied by accelerated decay of the already fragile links with the original encoding contexts, resulting in progressive reductions in contextual dependency. Here, we tested these predictions in a large-scale study. Participants encoded emotional and neutral episodes, and were assessed 30 minutes (N = 40), one day (N = 40), one week (N = 39), or two weeks (N = 39) later on item memory, contextual dependency, and subjective quality of memory. The results show that, with the passage of time, emotional memories were indeed characterized by increasingly stronger item memory and weaker contextual dependency. Interestingly, analyses of the subjective quality of memories revealed that the time-dependent strengthening of memory for emotional items was expressed in familiarity, whereas the progressively smaller contextual dependency of emotional episodes was reflected in recollection. Together, these findings uncover the time-dependent transformation of emotional episodic memories, thereby shedding light on the ways healthy and maladaptive human memories may develop.

Author(s):  
N. Broers ◽  
N.A. Busch

Abstract: Many photographs of real-life scenes are very consistently remembered or forgotten by most people, making these images intrinsically memorable or forgettable. Although machine vision algorithms can predict a given image’s memorability very well, nothing is known about the subjective quality of these memories: are memorable images recognized based on strong feelings of familiarity or on recollection of episodic details? We tested people’s recognition memory for memorable and forgettable scenes selected from image memorability databases, which contain memorability scores for each image, based on large-scale recognition memory experiments. Specifically, we tested the effect of intrinsic memorability on recollection and familiarity using cognitive computational models based on receiver operating characteristics (ROCs; Experiments 1 and 2) and on remember/know (R/K) judgments (Experiment 2). The ROC data of Experiment 1 indicated that image memorability boosted memory strength, but did not find a specific effect on recollection or familiarity. By contrast, ROC data from Experiment 2, which was designed to facilitate encoding and, in turn, recollection, found evidence for a specific effect of image memorability on recollection. Moreover, R/K judgments showed that, on average, memorability boosts recollection rather than familiarity. However, we also found a large degree of variability in these judgments across individual images: some images actually achieved high recognition rates by exclusively boosting familiarity rather than recollection. Together, these results show that current machine vision algorithms that can predict an image’s intrinsic memorability in terms of hit rates fall short of describing the subjective quality of human memories.


2019
Author(s):  
Nico Broers ◽  
Niko Busch

Many photographs of real-life scenes are very consistently remembered or forgotten by most people, making these images intrinsically memorable or forgettable. Although machine vision algorithms can predict a given image’s memorability very well, nothing is known about the subjective quality of these memories: are memorable images recognized based on strong feelings of familiarity or on recollection of episodic details? We tested people’s recognition memory for memorable and forgettable scenes selected from image memorability databases, which contain memorability scores for each image, based on large-scale recognition memory experiments. Specifically, we tested the effect of intrinsic memorability on recollection and familiarity using cognitive computational models based on Receiver Operating Characteristics (ROCs; Experiments 1 and 2) and on remember/know (R/K) judgments (Experiment 2). The ROC data of Experiment 1 indicated that image memorability boosted memory strength, but did not find a specific effect on recollection or familiarity. By contrast, ROC data from Experiment 2, which was designed to facilitate encoding and, in turn, recollection, found more evidence for a specific effect of image memorability on recollection. Moreover, R/K judgments showed that, on average, memorability boosts recollection rather than familiarity. However, we also found a large degree of variability in these ratings across individual images: some images actually achieved high recognition rates by exclusively boosting familiarity rather than recollection. Together, these results show that current machine vision algorithms that can predict an image’s intrinsic memorability in terms of hit rates fall short of describing the subjective quality of human memories.
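The studies above separate recollection from familiarity using confidence-rating ROCs. As a minimal illustration of how such an ROC is constructed from 6-point old/new confidence judgments, here is a hedged Python sketch; the response counts are invented, and the full dual-process signal detection model fit used in such studies (maximum-likelihood estimation of recollection and familiarity parameters) is not attempted here.

```python
import numpy as np

# Illustrative construction of a confidence-rating ROC for one stimulus set.
# Counts are invented; ratings run from "sure old" (leftmost) to "sure new".
old_counts = np.array([42, 23, 11,  8,  9,  7])   # responses to old (studied) images
new_counts = np.array([ 6,  9, 12, 15, 24, 34])   # responses to new (lure) images

# Cumulative hit and false-alarm rates, sweeping from the strictest criterion outward.
hits = np.cumsum(old_counts) / old_counts.sum()
fas  = np.cumsum(new_counts) / new_counts.sum()

# Under the dual-process signal detection model, recollection appears as a
# nonzero y-intercept of the ROC and familiarity as its curvilinearity.
for f, h in zip(fas, hits):
    print(f"FA = {f:.2f}  Hit = {h:.2f}")
```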


Author(s):  
A. Babirad

Cerebrovascular diseases are a problem of today's world and, according to forecasts, will remain a problem of the near future. The main risk factors for the development of ischemic disorders of the cerebral circulation include obesity and aging, arterial hypertension, smoking, diabetes mellitus, and heart disease. An effective strategy for the prevention of cerebrovascular events is based on the implementation of large-scale risk-control measures, including the use of antiplatelet and anticoagulant therapy and invasive interventions such as atherectomy, angioplasty, and stenting. The combined efforts of neurologists, cardiologists, vascular surgeons, endocrinologists, and other specialists are therefore the basis for achieving an acceptable clinical outcome. A review of the SF-36 method for assessing quality of life in patients with the consequences of transient ischemic attack is presented. Quality-of-life assessment is recognized in international medical practice and research as an indicator that is also used to evaluate the quality of health systems and in general sociological research.


2019
Author(s):  
Kamal Batra ◽  
Stefan Zahn ◽  
Thomas Heine

We thoroughly benchmark time-dependent density-functional theory for the predictive calculation of UV/Vis spectra of porphyrin derivatives. With the aim of providing an approach that is computationally feasible for large-scale applications such as biological systems or molecular framework materials, while performing with high accuracy for the Q-bands, we compare the results given by various computational protocols, including basis sets, density functionals (gradient-corrected local functionals, hybrids, double hybrids, and range-separated functionals), and several variants of time-dependent density-functional theory, including the simplified Tamm-Dancoff approximation. An excellent choice for these calculations is the range-separated functional CAM-B3LYP in combination with the simplified Tamm-Dancoff approximation and a basis set of double-ζ quality, def2-SVP (mean absolute error [MAE] of ~0.05 eV). This is not surpassed by more expensive approaches, not even by double-hybrid functionals, and only systematic excitation-energy scaling slightly improves the results (MAE ~0.04 eV).
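As a rough illustration of the kind of protocol benchmarked above, here is a hedged sketch of a CAM-B3LYP/def2-SVP Tamm-Dancoff excited-state calculation in PySCF. Note the assumptions: PySCF implements the standard Tamm-Dancoff approximation rather than the simplified TDA (sTDA) evaluated in the paper, the molecule is a small placeholder rather than a porphyrin, and the software choice is illustrative only.

```python
# Minimal sketch (not the authors' actual setup): CAM-B3LYP/def2-SVP
# Tamm-Dancoff excited states with PySCF.
from pyscf import gto, dft, tddft

# Placeholder geometry: a porphyrin would be specified here; water is used
# only to keep the example self-contained and fast.
mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
            basis="def2-svp")

mf = dft.RKS(mol)
mf.xc = "camb3lyp"       # range-separated hybrid recommended in the abstract
mf.kernel()              # ground-state SCF

td = tddft.TDA(mf)       # Tamm-Dancoff approximation (standard, not sTDA)
td.nstates = 8           # lowest excited states
td.kernel()
td.analyze()             # prints excitation energies and oscillator strengths
```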


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems involving people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing a contributor's expected quality in community tagging systems. The method should use only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model of community image tagging (including a model of a contributor) is proposed, along with a method of assessing a contributor's expected quality. The method is based on comparing the tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method, with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of the pairwise domination characteristic matrix. Community tagging simulations confirmed that the proposed method adequately estimates the expected quality of contributors in a community tagging system, provided that contributor behavior fits the proposed model. Practical relevance: The obtained results can be used in the development of systems based on the coordinated efforts of a community (primarily, community tagging systems).
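As an illustration of the final step described above, here is a hedged sketch: given a nonnegative pairwise domination characteristic matrix (a small, made-up example, since the paper does not publish its data or the exact matrix construction), the expected contributor qualities are taken as the dominant positive eigenvector, computed here by simple power iteration. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def expected_quality(domination: np.ndarray, iters: int = 1000, tol: float = 1e-10) -> np.ndarray:
    """Dominant (positive) eigenvector of a nonnegative pairwise domination
    matrix via power iteration, normalized to sum to 1."""
    n = domination.shape[0]
    q = np.full(n, 1.0 / n)              # start from a uniform quality estimate
    for _ in range(iters):
        q_next = domination @ q
        q_next /= q_next.sum()           # renormalize to avoid overflow
        if np.linalg.norm(q_next - q, ord=1) < tol:
            break
        q = q_next
    return q_next

# Toy 3-contributor example: entry [i, j] expresses how strongly contributor
# i's tag sets "dominate" contributor j's on shared images (values made up).
D = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.0, 2.0],
              [1/3, 0.5, 1.0]])

print(expected_quality(D))  # higher values -> higher expected tagging quality
```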


2020
Vol. 103 (11)
pp. 1194-1199

Objective: To develop and validate a Thai version of the Wisconsin Stone Quality of Life (TH WISQoL) Questionnaire. Materials and Methods: The authors developed the TH WISQoL Questionnaire based on a standard multi-step process. Subsequently, the authors recruited patients with kidney stones and asked them to complete the TH WISQoL and a validated Thai version of the 36-Item Short Form Survey (TH SF-36). The authors calculated the internal consistency and interdomain correlation of the TH WISQoL and assessed the convergent validity between the two instruments. Results: Thirty kidney stone patients completed the TH WISQoL and the TH SF-36. The TH WISQoL showed acceptable internal consistency for all domains (Cronbach’s alpha 0.768 to 0.909). Interdomain correlation was high for most domains (r=0.698 to 0.779), except between the Vitality and Disease domains, which showed a moderate correlation (r=0.575). For convergent validity, the TH WISQoL demonstrated a good overall correlation with the TH SF-36 (r=0.796, p<0.05). Conclusion: The TH WISQoL is valid and reliable for evaluating the quality of life of Thai patients with kidney stones. A further large-scale multi-center study is warranted to confirm its applicability in Thailand.
Keywords: Quality of life, Kidney stone, Validation, Outcome measurement
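For illustration, here is a minimal sketch of the internal-consistency statistic reported above (Cronbach's alpha) for one questionnaire domain. The response matrix is invented and the function name is not from the study; convergent validity would analogously be checked with a Pearson correlation (e.g., np.corrcoef) between the two instruments' summary scores.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                            # number of items in the domain
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Made-up 5-point responses from 6 respondents on a 4-item domain.
scores = np.array([[4, 5, 4, 5],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 5],
                   [3, 4, 3, 4]])
print(round(cronbach_alpha(scores), 3))
```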


Author(s):  
Jeasik Cho

This book provides the qualitative research community with some insight on how to evaluate the quality of qualitative research. This topic has gained little attention during the past few decades. We, qualitative researchers, read journal articles, serve on masters’ and doctoral committees, and also make decisions on whether conference proposals, manuscripts, or large-scale grant proposals should be accepted or rejected. It is assumed that various perspectives or criteria, depending on various paradigms, theories, or fields of discipline, have been used in assessing the quality of qualitative research. Nonetheless, until now, no textbook has been specifically devoted to exploring theories, practices, and reflections associated with the evaluation of qualitative research. This book constructs a typology of evaluating qualitative research, examines actual information from websites and qualitative journal editors, and reflects on some challenges that are currently encountered by the qualitative research community. Many different kinds of journals’ review guidelines and available assessment tools are collected and analyzed. Consequently, core criteria that stand out among these evaluation tools are presented. Readers are invited to join the author to confidently proclaim: “Fortunately, there are commonly agreed, bold standards for evaluating the goodness of qualitative research in the academic research community. These standards are a part of what is generally called ‘scientific research.’ ”


SLEEP
2020
Author(s):  
Luca Menghini ◽  
Nicola Cellini ◽  
Aimee Goldstone ◽  
Fiona C Baker ◽  
Massimiliano de Zambotti

Abstract Sleep-tracking devices, particularly within the consumer sleep technology (CST) space, are increasingly used in both research and clinical settings, providing new opportunities for large-scale data collection in highly ecological conditions. Due to the fast pace of the CST industry combined with the lack of a standardized framework to evaluate the performance of sleep trackers, their accuracy and reliability in measuring sleep remains largely unknown. Here, we provide a step-by-step analytical framework for evaluating the performance of sleep trackers (including standard actigraphy), as compared with gold-standard polysomnography (PSG) or other reference methods. The analytical guidelines are based on recent recommendations for evaluating and using CST from our group and others (de Zambotti and colleagues; Depner and colleagues), and include raw data organization as well as critical analytical procedures, including discrepancy analysis, Bland–Altman plots, and epoch-by-epoch analysis. Analytical steps are accompanied by open-source R functions (depicted at https://sri-human-sleep.github.io/sleep-trackers-performance/AnalyticalPipeline_v1.0.0.html). In addition, an empirical sample dataset is used to describe and discuss the main outcomes of the proposed pipeline. The guidelines and the accompanying functions are aimed at standardizing the testing of CST performance, not only to increase the replicability of validation studies, but also to provide ready-to-use tools to researchers and clinicians. All in all, this work can help increase the efficiency, interpretation, and quality of validation studies, and improve the informed adoption of CST in research and clinical settings.
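The authors' analytical pipeline is provided as R functions at the link above; purely as an illustration of one of its components, here is a hedged Python sketch of a classic Bland–Altman comparison of tracker versus PSG total sleep time. The values are made up, and the simple 1.96·SD limits of agreement shown here stand in for the fuller treatment in the published guidelines.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative Bland-Altman comparison of total sleep time (TST, minutes)
# from a consumer sleep tracker vs. PSG. Values are invented for the sketch.
psg_tst     = np.array([420, 395, 460, 380, 445, 410, 430, 400])
tracker_tst = np.array([435, 388, 470, 395, 440, 425, 445, 390])

diff = tracker_tst - psg_tst           # device minus reference
mean = (tracker_tst + psg_tst) / 2     # average of the two methods
bias = diff.mean()                     # systematic over/underestimation
loa  = 1.96 * diff.std(ddof=1)         # 95% limits of agreement (normal approx.)

plt.scatter(mean, diff)
plt.axhline(bias, linestyle="--", label=f"bias = {bias:.1f} min")
plt.axhline(bias + loa, linestyle=":", label="upper LoA")
plt.axhline(bias - loa, linestyle=":", label="lower LoA")
plt.xlabel("Mean of tracker and PSG TST (min)")
plt.ylabel("Tracker - PSG TST (min)")
plt.legend()
plt.show()
```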

