indirect measure
Recently Published Documents

Total documents: 484 (last five years: 146)
H-index: 35 (last five years: 4)

2022 · Vol 12 (1)
Author(s): Javier Ballester, Anne K. Baker, Ilkka K. Martikainen, Vincent Koppelmans, Jon-Kar Zubieta, ...

Abstract. µ-Opioid receptors (MOR) are a major target of endogenous and exogenous opioids, including opioid pain medications. The µ-opioid neurotransmitter system is heavily implicated in the pathophysiology of chronic pain and opioid use disorder and, as such, central measures of µ-opioid system functioning are increasingly being considered as putative biomarkers of opioid misuse risk. To explore the relationship between MOR system function and risk for opioid misuse, 28 subjects with chronic nonspecific back pain completed a clinically validated measure of opioid misuse risk, the Pain Medication Questionnaire (PMQ), and were subsequently separated into high (PMQ > 21) and low (PMQ ≤ 21) opioid misuse risk groups. The chronic pain patients, along with 15 control participants, underwent two separate [11C]-carfentanil positron emission tomography scans to explore MOR functional measures: one at baseline and one during a sustained pain-stress challenge, with the difference between the two providing an indirect measure of stress-induced endogenous opioid release. We found that chronic pain participants at high risk for opioid misuse displayed higher baseline MOR availability within the right amygdala than those at low risk. By contrast, patients at low risk for opioid misuse showed less pain-induced activation of MOR-mediated, endogenous opioid neurotransmission in the nucleus accumbens. This study links human in vivo MOR system functional measures to the development of addictive disorders and provides novel evidence that MORs and µ-opioid system responsivity may underlie the risk of opioid misuse among chronic pain patients.
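For readers who want to see the arithmetic behind the indirect release measure and the risk-group split described above, here is a minimal sketch. All values, variable names, and the use of a simple t-test are illustrative assumptions; the study's actual PET kinetic modelling and voxel-wise statistics are not reproduced.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject data: [11C]-carfentanil binding potential in a
# region of interest at baseline and during the pain-stress challenge, plus
# each subject's Pain Medication Questionnaire (PMQ) total score.
rng = np.random.default_rng(0)
bp_baseline = rng.normal(1.2, 0.2, size=28)    # placeholder values
bp_challenge = rng.normal(1.1, 0.2, size=28)   # placeholder values
pmq = rng.integers(5, 40, size=28)             # placeholder PMQ scores

# Stress-induced endogenous opioid release is indexed indirectly as the
# reduction in MOR availability from baseline to the challenge scan.
delta_bp = bp_baseline - bp_challenge

# Split into high (PMQ > 21) and low (PMQ <= 21) opioid-misuse-risk groups.
high_risk = delta_bp[pmq > 21]
low_risk = delta_bp[pmq <= 21]

# Compare the groups with an independent-samples t-test (illustrative only).
t, p = stats.ttest_ind(high_risk, low_risk, equal_var=False)
print(f"high n={high_risk.size}, low n={low_risk.size}, t={t:.2f}, p={p:.3f}")
```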


2022 · Vol 15
Author(s): Johanna M. Rimmele, Pius Kern, Christina Lubinus, Klaus Frieler, David Poeppel, ...

Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication Index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs overall scored higher and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample overall scored lower, and the scores of the LOWs differed significantly from the normal distribution, falling around the ∼30th percentile. While HIGHs more often reported musical training than LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated Gold-MSI subscores were decorrelated, the Musical Perception and Musical Training subscales in particular allowed inference of speech-to-speech synchronization behavior. Differential effects of musical perception and training were observed: training predicted audio-motor synchronization in both groups, whereas perception did so only in the HIGHs. Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from the training and perceptual aspects of musical sophistication, suggesting shared mechanisms involved in speech and music perception.
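The core analytical idea above is to decorrelate the intercorrelated Gold-MSI subscales and then ask whether they still let one infer synchronizer group. The sketch below illustrates that idea with placeholder data; PCA whitening and logistic regression are stand-in choices, not necessarily the authors' actual decorrelation or inference procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical data: five Gold-MSI subscale scores per participant and a
# binary SSS-test label (1 = HIGH synchronizer, 0 = LOW synchronizer).
rng = np.random.default_rng(1)
n = 196
gold_msi = rng.normal(4.0, 1.0, size=(n, 5))   # placeholder subscale scores
is_high = rng.integers(0, 2, size=n)           # placeholder SSS-test labels

# The subscales are highly intercorrelated; PCA whitening is one simple way
# to decorrelate them before asking whether they still carry information
# about synchronizer group.
scores = StandardScaler().fit_transform(gold_msi)
decorrelated = PCA(whiten=True).fit_transform(scores)

# Logistic regression as a stand-in for "inferring" group membership from
# musical sophistication.
model = LogisticRegression().fit(decorrelated, is_high)
print("In-sample classification accuracy:", model.score(decorrelated, is_high))
```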


2021 · Vol 23 (4) · pp. 347-351
Author(s): Reena Kumari Jha, Samjhana Thapa, Roshan Kasti, Sumi Singh

Reaction time is an indirect index of the processing speed of the central nervous system. It is affected by several factors, including hand dominance and obesity. Obesity can be measured by body mass index. Thus, the aim of this study was to examine the relationship of body mass index and of dominant versus non-dominant hand with visual reaction time in healthy young females. A cross-sectional study was conducted in the Department of Physiology among 89 females. Height and weight were recorded and body mass index was calculated. The subjects were divided into four groups according to WHO criteria: underweight, normal weight, overweight, and obese. Visual reaction time was measured in milliseconds using the ruler drop method. The data were analyzed with the paired t-test and one-way ANOVA using IBM Statistical Package for the Social Sciences version 22. Of the 89 participants, 26 (29.21%) were underweight, 47 (52.80%) were of normal weight, 12 (13.48%) were overweight, and four (4.49%) were obese; mean reaction times for the dominant vs. non-dominant hand were 176.75±16.68 vs. 186.58±16.21, 175.12±15.03 vs. 185.43±15.64, 188.74±16.07 vs. 190.70±17.88, and 200.7±9.77 vs. 210.50±9.50 ms, respectively. All participants were right-handers, and the right hand reacted faster than the left hand. Reaction time was prolonged in underweight, overweight, and obese participants compared with normal-weight individuals. Our study showed that reaction time appears to be influenced by body mass index and by dominant versus non-dominant hand use, serving as an indirect measure of sensorimotor association.
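The ruler drop method mentioned above converts the distance the ruler falls before being caught into a reaction time via the free-fall relation d = ½gt², i.e. t = √(2d/g). A minimal sketch of that conversion (the function name and example distance are illustrative):

```python
import math

def ruler_drop_reaction_time_ms(drop_distance_cm: float) -> float:
    """Convert the distance a free-falling ruler travels before being caught
    into a reaction time, using d = 0.5 * g * t**2  =>  t = sqrt(2d / g)."""
    g = 9.81                       # gravitational acceleration, m/s^2
    d = drop_distance_cm / 100.0   # centimetres -> metres
    return math.sqrt(2.0 * d / g) * 1000.0  # seconds -> milliseconds

# Example: a catch at about 15 cm corresponds to roughly 175 ms, in the range
# of the dominant-hand means reported above.
print(f"{ruler_drop_reaction_time_ms(15):.0f} ms")
```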


PLoS ONE · 2021 · Vol 16 (12) · pp. e0261295
Author(s): Florian Langner, Julie G. Arenberg, Andreas Büchner, Waldo Nogueira

Objectives: The relationship between electrode-nerve interface (ENI) estimates and inter-subject differences in speech performance with sequential and simultaneous channel stimulation in adult cochlear implant listeners was explored. We investigated the hypothesis that individuals with good ENIs would perform better with simultaneous compared to sequential channel stimulation speech processing strategies than those estimated to have poor ENIs. Methods: Fourteen postlingually deaf implanted cochlear implant users participated in the study. Speech understanding was assessed with a sentence test at signal-to-noise ratios that resulted in 50% performance for each user with the baseline strategy F120 Sequential. Two simultaneous stimulation strategies with either two (Paired) or three sets of virtual channels (Triplet) were tested at the same signal-to-noise ratio. ENI measures were estimated through: (I) voltage spread with electrical field imaging, (II) behavioral detection thresholds with focused stimulation, and (III) slopes (IPG slope effect) and 50%-point differences (dB offset effect) of amplitude growth functions from electrically evoked compound action potentials (eCAPs) recorded with two interphase gaps. Results: A significant effect of strategy on speech understanding performance was found, with Triplets showing a trend towards worse speech understanding than sequential stimulation. Focused thresholds correlated positively with the difference required to reach most comfortable level (MCL) between the Sequential and Triplet strategies, an indirect measure of channel interaction. A significant offset effect (difference in dB between the 50%-points of the eCAP amplitude growth functions for the two interphase gaps) was observed. No significant correlation was observed between the slopes for the two IPGs tested. None of the measures used in this study correlated with the differences in speech understanding scores between strategies. Conclusions: The ENI measure based on behavioral focused thresholds could explain some of the difference in MCLs, but none of the ENI measures could explain the decrease in speech understanding with increasing numbers of simultaneously stimulated electrodes in the processing strategies.
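To make the eCAP-based ENI measures above concrete, the sketch below fits amplitude growth functions (AGFs) for two interphase gaps and reports a slope difference and a 50%-point (dB offset) difference. The sigmoid form, parameter values, and noise are assumptions for illustration; the paper's actual fitting procedure is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(level_db, amp_max, slope, midpoint_db):
    """Simple sigmoidal eCAP amplitude growth function (AGF)."""
    return amp_max / (1.0 + np.exp(-slope * (level_db - midpoint_db)))

def fit_agf(levels_db, amplitudes):
    """Fit the AGF and return its slope parameter and 50%-point (dB)."""
    p0 = [amplitudes.max(), 0.5, float(np.median(levels_db))]
    (amp_max, slope, midpoint_db), _ = curve_fit(
        sigmoid, levels_db, amplitudes, p0=p0, maxfev=10000)
    return slope, midpoint_db

# Hypothetical measurements at one electrode for two interphase gaps (IPGs).
rng = np.random.default_rng(2)
levels = np.linspace(30, 60, 13)   # stimulation level, dB
agf_short_ipg = sigmoid(levels, 400, 0.4, 48) + rng.normal(0, 5, levels.size)
agf_long_ipg = sigmoid(levels, 430, 0.5, 45) + rng.normal(0, 5, levels.size)

slope_short, mid_short = fit_agf(levels, agf_short_ipg)
slope_long, mid_long = fit_agf(levels, agf_long_ipg)

print(f"IPG slope effect: {slope_long - slope_short:+.3f}")
print(f"dB offset effect (50%-point difference): {mid_short - mid_long:+.2f} dB")
```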


Author(s): Rebekka Schröder, Martin Reuter, Kaja Faßbender, Thomas Plieger, Jessie Poulsen, ...

Abstract. Rationale: Nicotine has been widely studied for its pro-dopaminergic effects. However, at the behavioural level, past investigations have yielded heterogeneous results concerning effects on cognitive, affective, and motor outcomes, possibly linked to individual differences at the level of genetics. A candidate polymorphism is the 40-base-pair variable number of tandem repeats polymorphism (rs28363170) in the SLC6A3 gene coding for the dopamine transporter (DAT). The polymorphism has been associated with striatal DAT availability (9R-carriers > 10R-homozygotes), and 9R-carriers have been shown to react more strongly to dopamine-agonistic pharmacological challenges than 10R-homozygotes. Objectives: In this preregistered study, we hypothesized that 9R-carriers would be more responsive to nicotine due to genotype-related differences in DAT availability and the resulting dopamine activity. Methods: N = 194 non-smokers were grouped according to their genotype (9R-carriers, 10R-homozygotes) and received either 2-mg nicotine gum or placebo gum in a between-subject design. Spontaneous blink rate (SBR) was obtained as an indirect measure of striatal dopamine activity, and smooth pursuit, stop signal, simple choice, and affective processing tasks were carried out in randomized order. Results: Reaction times were decreased under nicotine compared to placebo in the simple choice and stop signal tasks, but nicotine and genotype had no effects on any of the other task outcomes. Conditional process analyses testing the mediating effect of SBR on performance, and how this is affected by genotype, yielded no significant results. Conclusions: Overall, we could not confirm our main hypothesis. Individual differences in nicotine response could not be explained by rs28363170 genotype.
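The "conditional process analysis" mentioned above is, in regression terms, a mediation model (drug → SBR → performance) whose paths may depend on genotype. A simplified, hedged sketch of that logic with fabricated placeholder data follows; it omits the bootstrap inference normally used for indirect effects and is not the authors' exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: drug condition, rs28363170 genotype group, spontaneous
# blink rate (SBR, the proposed mediator) and a task reaction time (outcome).
rng = np.random.default_rng(4)
n = 194
df = pd.DataFrame({
    "nicotine": rng.integers(0, 2, n),    # 1 = 2-mg nicotine gum, 0 = placebo
    "carrier_9r": rng.integers(0, 2, n),  # 1 = 9R-carrier, 0 = 10R-homozygote
})
df["sbr"] = 15 + 2 * df["nicotine"] + rng.normal(0, 4, n)
df["rt"] = 450 - 1.5 * df["sbr"] + rng.normal(0, 20, n)

# Path a: does the drug change the mediator, moderated by genotype?
path_a = smf.ols("sbr ~ nicotine * carrier_9r", data=df).fit()
# Path b (and direct effect): does the mediator predict the outcome,
# controlling for drug and genotype?
path_b = smf.ols("rt ~ sbr + nicotine * carrier_9r", data=df).fit()

# Conditional indirect effect = genotype-specific path-a coefficient times the
# path-b coefficient (bootstrap confidence intervals omitted for brevity).
a_10r = path_a.params["nicotine"]
a_9r = a_10r + path_a.params["nicotine:carrier_9r"]
b = path_b.params["sbr"]
print("Indirect effect, 10R-homozygotes:", round(a_10r * b, 3))
print("Indirect effect, 9R-carriers:    ", round(a_9r * b, 3))
```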


2021 · pp. S3-S11
Author(s): M. Kužma, P. Jackuliak, Z. Killinger, J. Payer

Parathyroid hormone (PTH) increases the release of serum calcium through osteoclasts, which leads to bone resorption. Primarily, PTH stimulates osteoblasts, increasing RANKL (receptor activator of nuclear factor kappa-B ligand) expression and thus osteoclast differentiation. In the kidneys, PTH increases calcium reabsorption and decreases phosphate reabsorption, and it stimulates 1-alpha-hydroxylase to synthesize active vitamin D. Primary hyperparathyroidism (PHPT) is characterized by skeletal or renal complications. Nowadays, the classical form of PHPT is seen less often, and asymptomatic or subclinical (oligosymptomatic) forms are more frequent. Previously, it was thought that cortical bone is preferentially affected by PHPT, predisposing bones to fracture at sites with a higher proportion of cortical bone. However, most studies have found an increased risk of vertebral fractures, showing that trabecular bone is also affected. Bone mineral density (BMD) measurement at all skeletal sites is advised, but an additional tool specific to fracture assessment is needed. Trabecular bone score (TBS), an indirect measure of trabecular bone, may be a useful method to estimate fracture risk. TBS is associated with vertebral fractures in PHPT regardless of BMD, age, BMI, and gender. Furthermore, there is an association between TBS and high-resolution peripheral quantitative computed tomography (HR-pQCT) parameters in the trabecular and cortical compartments. However, studies considering the effect of PHPT treatment on TBS are conflicting. Secondary hyperparathyroidism caused by vitamin D deficiency has been associated with impaired bone microarchitecture in all age categories, as measured by TBS and HR-pQCT, with further improvement after treatment with vitamin D.


Blood · 2021 · Vol 138 (Supplement 1) · pp. 3859-3859
Author(s): Maria Gabelli, Macarena Oporto Espuelas, Denise Bonney, Saskia Burridge, Susan Farish, ...

Abstract. Chimeric antigen receptor (CAR) T-cell therapy is a new, effective treatment for patients with relapsed/refractory (r/r) B-cell acute lymphoblastic leukaemia (ALL). Tisagenlecleucel achieved a complete remission (CR) rate with minimal residual disease (MRD) negativity of 81% at 3 months in the pivotal study; overall survival (OS) was 76% at 12 months (Maude et al, 2018). Real-world data confirmed similar outcomes, with 1-year OS of 77% and event-free survival (EFS) of 52% (Pasquini et al, 2020). Relapse can occur in the form of CD19-negative or CD19-positive ALL; the latter is associated with lack of persistence of the CAR T product. B-cell aplasia (BCA) is an indirect measure of CAR T presence. Early (<6 months from infusion) loss of BCA is associated with high relapse risk (Pillai et al, 2019); therefore, allogeneic stem cell transplantation (SCT) is often considered. However, SCT is associated with therapy-related morbidity and mortality, and not all patients will find a suitable donor. The optimal management of patients with loss of BCA is therefore yet to be defined. In our centre, we administered maintenance therapy to a cohort of children with early loss of BCA and, when compared to UK patients undergoing SCT for the same indication, we noted promising early outcomes, which we report here. We collected data on children with r/r ALL treated with tisagenlecleucel at Great Ormond Street Hospital (GOSH) from January 2018 to January 2021 who presented with loss of BCA without evidence of disease (negative molecular or flow cytometry MRD) within 12 months from infusion. Loss of BCA was defined as a peripheral B-cell count ≥0.10 x 10^9/L or bone marrow (BM) CD19+ events ≥0.1%. We compared outcomes of children who received maintenance as per the UKALL 2011 protocol at GOSH to those who received SCT for the same indication at any UK paediatric centre. Fourteen patients from GOSH met the inclusion criteria. Four lost BCA more than 6 months after CAR T infusion; none of them received additional therapy, and all are alive and in CR at a median of 535 days after CAR T infusion (Figure 1, A and B). Ten patients recovered B cells at <6 months: 3 proceeded to SCT, 6 started maintenance therapy, and 1 received other treatment. In 2 cases, maintenance was commenced after a second CAR T infusion. Two patients from the UK cohort met the inclusion criteria for the SCT group. Analysis was performed on 6 children who received maintenance and 5 who had SCT. Baseline characteristics of the 2 groups were similar (male/female ratio, median age at infusion, cytogenetics). Time from infusion to loss of BCA did not differ: the median was 80 days (range 28-168) in children who had maintenance vs 93 days (range 28-150) in those who had SCT. At a median follow-up of 511 days (range 222-812), 3/6 children who received maintenance relapsed at a median of 210 days after infusion and proceeded to further treatment; no patient relapsed post SCT. One child in the maintenance group died of disease 237 days after infusion; 2 children in the SCT group died of transplant-related mortality at 222 and 422 days post infusion. OS and EFS did not differ statistically between the 2 groups, as shown in Figure 1 (C and D). We observed that outcomes for patients who lost BCA within or at 2 months from infusion were poor regardless of the intervention (maintenance or SCT). Management of patients who experience early loss of BCA after CAR T is challenging, and there are limited data to support optimal treatment.
In our experience, maintenance therapy compared favourably with SCT, with similar rates of OS and EFS. Of note, 2/5 patients in the SCT group died of TRM, highlighting the toxicity of this approach in such heavily pre-treated patients. On the other hand, maintenance is a well-tolerated, low-cost treatment which can easily be delivered on an outpatient basis. Our preliminary data support investigation of this strategy in larger, prospectively recruited cohorts of patients. Moreover, our preliminary data suggest that the timing of loss of BCA is a crucial clinical parameter, as children who lost BCA within 2 months of infusion had the worst outcomes, possibly reflecting prior therapy intensity and its impact on autologous T cells. Figure 1. Disclosures: Amrolia: ADC Therapeutics: Other: Named inventor on a patent which is being transferred to ADCT.; Autolus: Patents & Royalties. Ghorashian: UCLB: Patents & Royalties: CARPALL; Novartis: Honoraria.
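The abstract gives operational definitions that translate directly into simple decision rules: the thresholds for loss of BCA and the <6-month cutoff that triggered consideration of SCT or maintenance. A minimal sketch of those rules follows; function names and example values are illustrative, not part of the study's protocol.

```python
from datetime import date

# Operational thresholds from the abstract: loss of B-cell aplasia (BCA) is a
# peripheral B-cell count >= 0.10 x 10^9/L or bone marrow CD19+ events >= 0.1%.
PERIPHERAL_B_CELL_THRESHOLD = 0.10  # x 10^9/L
BM_CD19_THRESHOLD_PERCENT = 0.1     # % of events

def has_lost_bca(peripheral_b_cells: float, bm_cd19_percent: float) -> bool:
    """Return True if either criterion for loss of BCA is met."""
    return (peripheral_b_cells >= PERIPHERAL_B_CELL_THRESHOLD
            or bm_cd19_percent >= BM_CD19_THRESHOLD_PERCENT)

def bca_loss_timing(infusion: date, loss_detected: date) -> str:
    """Classify loss of BCA as early (<6 months from infusion) or late, the
    distinction that triggered consideration of SCT or maintenance."""
    months = (loss_detected - infusion).days / 30.44
    return "early (<6 months)" if months < 6 else "late (>=6 months)"

# Examples with hypothetical values:
print(has_lost_bca(peripheral_b_cells=0.12, bm_cd19_percent=0.05))  # True
print(bca_loss_timing(date(2020, 1, 10), date(2020, 4, 1)))         # early (<6 months)
```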


2021 · Vol 8 (Supplement_1) · pp. S283-S283
Author(s): Danielle Dixon, Julieta Madrid-Morales, Jose Cadena-Zuluaga, Christopher R Frei

Abstract. Background: One of the tests used to identify COVID-19 infection is the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) reverse transcriptase quantitative polymerase chain reaction (RT-qPCR) test. The cycle threshold (Ct) value from this test provides an indirect measure of viral load, and it has been proposed that the Ct value could help with clinical decisions regarding duration of isolation. We hypothesized that Ct values would correlate with symptom duration in a population of veterans with COVID-19 infection. Methods: We reviewed the records of patients presenting to the emergency department (ED) or admitted to Audie L. Murphy VA Medical Center in San Antonio, Texas, with positive SARS-CoV-2 PCR tests, focusing on patients who received multiple SARS-CoV-2 RT-qPCR tests. We compared the date of symptom onset and the cycle threshold values from the initial test to those of another test ordered after 7, 10, and 20 days from symptom onset, recording the Ct values for the N2 and E genes. Patients were classified as mild, severe, or critical based on Centers for Disease Control and Prevention (CDC) criteria. A Ct value of >30 was used as the threshold for transmissible disease, based on previously published studies. Results: We identified 49 patients with more than two SARS-CoV-2 RT-qPCR tests. Patients with mild disease tested within ten days of symptom onset (n=10) had mean Ct values of 23.2 (±5.6) and 26.0 (±5.8) for the E and N2 genes, respectively. Patients with mild disease tested more than ten days after symptom onset (n=4) had mean Ct values of 26.0 (±6.5) and 27.8 (±6.8). When we stratified the population by disease severity, patients with severe and critical disease tested less than ten days from symptom onset (n=24) had mean Ct values of 20.1 (±7.3) and 23.4 (±7.5), while patients with severe and critical disease tested more than twenty days from symptom onset (n=6) had Ct values of 29.0 (±5.1) and 31.1 (±5.4). Conclusion: We found that Ct values increased with longer symptom duration. We currently use the CDC criteria to discontinue isolation at ten days for mild disease and twenty days for severe and critical disease. The findings of this study suggest that our current practice for duration of isolation correlates with increasing Ct values near or above the threshold for transmissible disease. Disclosures: All Authors: No reported disclosures.
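The analysis described above amounts to stratifying repeat tests by severity and days from symptom onset, summarizing Ct values per stratum, and checking them against a Ct threshold of 30. A minimal sketch with fabricated records follows; the grouping function only approximates the abstract's strata and the numbers are placeholders.

```python
import pandas as pd

# Hypothetical repeat-test records: disease severity, days from symptom onset
# to the repeat RT-qPCR test, and Ct values for the E and N2 gene targets.
tests = pd.DataFrame({
    "severity": ["mild", "mild", "severe", "critical", "severe"],
    "days_from_onset": [8, 12, 9, 22, 25],
    "ct_e": [22.5, 27.1, 19.8, 30.2, 31.0],
    "ct_n2": [25.3, 28.4, 22.9, 31.5, 32.2],
})

CT_THRESHOLD = 30  # transmissibility threshold cited in the abstract

def stratum(row):
    """Approximate the abstract's grouping: mild disease split at 10 days,
    severe/critical split at 20 days from symptom onset."""
    cutoff = 10 if row["severity"] == "mild" else 20
    sign = "<=" if row["days_from_onset"] <= cutoff else ">"
    return f"{row['severity']}, {sign}{cutoff} days"

tests["stratum"] = tests.apply(stratum, axis=1)
tests["above_threshold"] = tests[["ct_e", "ct_n2"]].min(axis=1) > CT_THRESHOLD

print(tests.groupby("stratum")[["ct_e", "ct_n2"]].agg(["mean", "std"]))
print(tests[["stratum", "above_threshold"]])
```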


Author(s): Bhavna Gupta, Vijay Adabala, Pratik Tuppad, Unni Kannan

Background: Anaesthesiologists are under considerable stress during the perioperative period, and this stress increased further during the COVID-19 pandemic. Several observational studies have been done to assess residents' stress levels. Methods: This was a prospective observational cohort study of anaesthesiology residents in a tertiary care academic institution. We measured minute-to-minute heart rate variability, which can serve as an indirect measure of stress level, with the help of the Mi 4 wrist band, which works on the principle of photoplethysmography (PPG). Results: The difference between baseline HR and resting HR was observed to be substantial (p values 0.115 and 0.000, respectively). The percentage rise in heart rate during intubation relative to resting heart rate was 42.79 ± 25.54 percentage points. Conclusion: Users can use this type of ongoing information as feedback to increase their work efficacy, and understanding how to use these smart devices may assist in managing day-to-day stress.
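Two quantities carry the reasoning above: the percentage rise in heart rate during intubation relative to rest, and a heart rate variability metric derived from PPG beat intervals. A short sketch follows; RMSSD is one common time-domain HRV metric and is an assumption here, since the abstract does not state which HRV measure the wrist band reports.

```python
import math

def percent_rise(intubation_hr: float, resting_hr: float) -> float:
    """Percentage rise in heart rate during intubation relative to rest."""
    return (intubation_hr - resting_hr) / resting_hr * 100.0

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences, a common time-domain heart
    rate variability metric computable from PPG-derived beat intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical values: a rise from 80 to 114 bpm is ~42.5%, close to the mean
# rise of 42.79 percentage points reported above.
print(f"{percent_rise(114, 80):.1f} %")
print(f"RMSSD: {rmssd([820, 800, 790, 805, 815]):.1f} ms")
```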


QJM · 2021 · Vol 114 (Supplement_1)
Author(s): Shaimaa S Yousef, Lamyaa S Al Bagoury, Sahar A Dewedar, Sahar M Sabbour, Wagida A Anwar

Abstract. Background: Patient satisfaction can be considered an indirect measure of health outcomes and of the quality of provided services. Objectives: To compare HCV patients' satisfaction regarding care and treatment at selected Viral Hepatitis Outpatient Clinics in Cairo. Methods: This cross-sectional study recruited 300 HCV patients from Viral Hepatitis Outpatient Clinics in University, Ministry of Health (MOH), and Insurance Hospitals (100 HCV patients from each clinic). Recruited HCV patients had attended at least 2 visits to the Viral Hepatitis Clinics. They completed an interview questionnaire covering socio-demographic data, history of HCV diagnosis, onset and type of treatment, and the Hepatitis Patients Satisfaction Questionnaire (HPSQ). Results: The mean ages of HCV patients were 48.9 ± 13.5, 50.4 ± 10.4, and 54.8 ± 10.9 years in the University, MOH, and Insurance Hospitals, respectively. Females accounted for 63% of HCV patients in the University Hospital sample, whereas males accounted for 54% and 57% in the MOH and Insurance Hospitals, respectively. Most of the studied HCV patients were referred by specialists: 59%, 86%, and 87% in the University, MOH, and Insurance Hospitals, respectively. A statistically significant difference was found between the 3 clinics in rating the quality of received HCV services, meeting patients' needs, coping with HCV disease, and helping patients access specialist services (p < 0.01). The study revealed that the majority of health providers in the 3 Viral Hepatitis Clinics did not involve HCV patients in making decisions about their treatment. Conclusion: HPSQ findings showed that University Hospital patients were more satisfied with their HCV treatment and management than MOH and Insurance Hospital patients, except for involvement of patients in decision-making, which was lacking in all 3 hospitals.
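The between-clinic comparisons above (p < 0.01) are the kind that can be illustrated with a chi-square test of independence on categorical satisfaction ratings. The sketch below uses made-up counts and does not claim to be the test the authors actually ran.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of patients rating the quality of received HCV services
# as good vs. poor in each of the three clinics (100 patients per clinic).
#                     good  poor
ratings = np.array([[  82,   18],   # University Hospital
                    [  65,   35],   # Ministry of Health Hospital
                    [  60,   40]])  # Insurance Hospital

# A chi-square test of independence asks whether the rating distribution
# differs across clinics; the abstract reports p < 0.01 for such comparisons.
chi2, p, dof, expected = chi2_contingency(ratings)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```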

