Using Wearable Cameras to Assess Foods and Beverages Omitted in 24 Hour Dietary Recalls and a Text Entry Food Record App

Nutrients ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 1806
Author(s):  
Virginia Chan ◽  
Alyse Davies ◽  
Lyndal Wellard-Cole ◽  
Silvia Lu ◽  
Hoi Ng ◽  
...  

Technology-enhanced methods of dietary assessment may still face the common limitations of self-report. This study aimed to assess the foods and beverages omitted when both a 24 h recall and a smartphone app were used to assess dietary intake, compared with camera images. For three consecutive days, young adults (18–30 years) wore an Autographer camera that took point-of-view images every 30 seconds. Over the same period, participants reported their diet in the app and completed daily 24 h recalls. Camera images were reviewed for foods and beverages, then matched to the items reported in the 24 h recall and the app. ANOVA (with post hoc analysis using Tukey's Honest Significant Difference) and paired t-tests were conducted. Discretionary snacks were frequently omitted by both methods (p < 0.001). Water was omitted more frequently in the app than in the camera images (p < 0.001) and the 24 h recall (p < 0.001). Dairy and alternatives (p = 0.001), sugar-based products (p = 0.007), savoury sauces and condiments (p < 0.001), fats and oils (p < 0.001) and alcohol (p = 0.002) were more frequently omitted in the app than in the 24 h recall. The use of traditional self-report methods of assessing diet remains problematic even with the addition of technology, and finding new objective methods that are unobtrusive and impose a low burden on participants remains a challenge.
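The analysis pipeline named in the abstract (one-way ANOVA with a Tukey Honest Significant Difference post hoc, plus a paired t-test) can be sketched in Python. All counts below are simulated for illustration, not the study's data, and `scipy.stats.tukey_hsd` is assumed available in a recent SciPy.

```python
# Sketch of the abstract's statistical pipeline on invented omission counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-participant counts of omitted items for each method
camera = rng.poisson(1.0, size=30)
recall_24h = rng.poisson(2.0, size=30)
app = rng.poisson(3.5, size=30)

# One-way ANOVA across the three assessment methods
f_stat, p_anova = stats.f_oneway(camera, recall_24h, app)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Tukey HSD post hoc: all pairwise comparisons with adjusted p-values
tukey = stats.tukey_hsd(camera, recall_24h, app)
print(tukey)

# Paired t-test: the same participants measured by two methods
t_stat, p_paired = stats.ttest_rel(app, recall_24h)
print(f"Paired t-test (app vs 24 h recall): t={t_stat:.2f}, p={p_paired:.4f}")
```

Tukey HSD is the conventional follow-up here because it controls the family-wise error rate across all pairwise method comparisons after a significant omnibus ANOVA.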

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Suvanjaa Sivalingam ◽  
Emil List Larsen ◽  
Daniel H. van Raalte ◽  
Marcel H. A. Muskiet ◽  
Mark M. Smits ◽  
...  

Abstract Glucagon-like peptide 1 receptor agonists have shown cardioprotective effects, which have been suggested to be mediated through inhibition of oxidative stress. We investigated the effect of treatment with a glucagon-like peptide 1 receptor agonist (liraglutide) on oxidative stress, measured as urinary nucleic acid oxidation, in persons with type 2 diabetes, in a post-hoc analysis of two independent, randomised, placebo-controlled and double-blinded clinical trials. In a cross-over study, persons with type 2 diabetes and microalbuminuria (LIRALBU, n = 32) received liraglutide (1.8 mg/day) or placebo for 12 weeks in random order, separated by 4 weeks of wash-out. In a parallel-group study, obese persons with type 2 diabetes (SAFEGUARD, n = 56) received liraglutide (1.8 mg/day), sitagliptin (100 mg/day) or placebo for 12 weeks. Endpoints were changes in the urinary markers of DNA oxidation [8-oxo-7,8-dihydro-2′-deoxyguanosine (8-oxodG)] and RNA oxidation [8-oxo-7,8-dihydroguanosine (8-oxoGuo)]. In LIRALBU, we observed no significant differences between treatment periods in urinary excretion of 8-oxodG [0.028 (standard error (SE): 0.17) nmol/mmol creatinine, p = 0.87] or of 8-oxoGuo [0.12 (0.12) nmol/mmol creatinine, p = 0.31]. In SAFEGUARD, excretion of 8-oxodG was not changed in the liraglutide group [2.8 (−8.51; 15.49) %, p = 0.62], but a significant decline was demonstrated in the placebo group [−12.6 (−21.3; −3.1) %, p = 0.02], resulting in a relative increase in the liraglutide group compared to placebo (0.16 nmol/mmol creatinine, SE 0.07, p = 0.02). Treatment with sitagliptin compared to placebo demonstrated no significant difference [0.07 (0.07) nmol/mmol creatinine, p = 0.34]. Nor were any significant differences observed in urinary excretion of 8-oxoGuo for liraglutide vs placebo [0.09 (SE: 0.07) nmol/mmol creatinine, p = 0.19] or sitagliptin vs placebo [0.07 (SE: 0.07) nmol/mmol creatinine, p = 0.35].
This post-hoc analysis could not demonstrate a beneficial effect of 12 weeks of treatment with liraglutide or sitagliptin on oxidatively generated modifications of nucleic acid in persons with type 2 diabetes.


2016 ◽  
Vol 25 (2) ◽  
pp. 93-7 ◽  
Author(s):  
I B Rangga Wibhuti ◽  
Amiliana M. Soesanto ◽  
Fahmi Shahab

Background: Prior studies have compared the E/e' elevation in preeclampsia patients with that in normal patients; however, there are no data on whether this elevation persists after birth. The aim of this study was to analyze diastolic function in preeclampsia patients during the pre- and post-partum period using the E/e' parameter. Methods: This was a prospective cohort study of pregnant women with preeclampsia who were hospitalized and planned for pregnancy termination. Basic clinical characteristics were obtained from all subjects. Echocardiography was done prepartum, 48-72 hours after termination, and 40-60 days postpartum. Post hoc analysis using the least significant difference method was used to compare the results between measurements. Results: Thirty subjects were enrolled in the study. Analysis of E/e' showed a statistical difference between prepartum E/e' and 40 days postpartum E/e' (11.87±3.184 vs 9.43±2.529, p=0.001, CI=1.123-3.751), as well as between the 48 hours post-partum and 40 days post-partum periods (12.12±2.754 vs 9.43±2.529, p<0.001, CI=1.615-3.771). There was no statistical difference between pre-partum E/e' and 48 hours post-partum E/e' (11.87±3.184 vs 12.12±2.754, p=0.633, CI=-1.345-0.832). Conclusion: This study showed that diastolic dysfunction in preeclampsia patients persists up until a few days after birth but resolves in time (40 days after birth), as measured by tissue Doppler imaging.
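The least significant difference (LSD) post hoc used above amounts to unadjusted pairwise paired t-tests between timepoints, each reported with a confidence interval on the mean paired difference. A minimal sketch, on simulated E/e' values rather than the study's data:

```python
# LSD-style post hoc: unadjusted pairwise paired t-tests across timepoints.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30
# Hypothetical E/e' per subject at the three echo timepoints
e_over_e = {
    "prepartum": rng.normal(11.9, 3.2, n),
    "48h_postpartum": rng.normal(12.1, 2.8, n),
    "40d_postpartum": rng.normal(9.4, 2.5, n),
}

for a, b in combinations(e_over_e, 2):
    t, p = stats.ttest_rel(e_over_e[a], e_over_e[b])
    diff = e_over_e[a] - e_over_e[b]
    # 95% CI of the mean paired difference (matches the abstract's CI style)
    ci = stats.t.interval(0.95, n - 1, loc=diff.mean(), scale=stats.sem(diff))
    print(f"{a} vs {b}: t={t:.2f}, p={p:.4f}, CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

Because LSD applies no multiplicity correction, it is usually run only after a significant omnibus test, as the abstract describes.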


10.2196/14760 ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. e14760
Author(s):  
Hyunggu Jung ◽  
George Demiris ◽  
Peter Tarczy-Hornoch ◽  
Mark Zachry

Background More than 1 in 4 people in the United States aged 65 years and older have type 2 diabetes. For diabetes care, medical nutrition therapy is recommended as a clinically effective intervention. Previous researchers have developed and validated dietary assessment methods using images of food items to improve the accuracy of self-reporting over traditional methods. Nevertheless, little is known about the usability of image-assisted dietary assessment methods for older adults with diabetes. Objective The aims of this study were (1) to create a food record app for dietary assessments (FRADA) that would support image-assisted dietary assessments, and (2) to evaluate the usability of FRADA for older adults with diabetes. Methods For the development of FRADA, we identified design principles that address the needs of older adults and implemented three fundamental tasks required for image-assisted dietary assessments: capturing, viewing, and transmitting images of food based on the design principles. For the usability assessment of FRADA, older adults aged 65 to 80 years (11 females and 3 males) were assigned to interact with FRADA in a lab-based setting. Participants’ opinions of FRADA and its usability were determined by a follow-up survey and interview. As an evaluation indicator of usability, the responses to the survey, including an after-scenario questionnaire, were analyzed. Qualitative data from the interviews confirmed the responses to the survey. Results We developed a smartphone app that enables older adults with diabetes to capture, view, and transmit images of food items they consumed. The findings of this study showed that FRADA and its instructions for capturing, viewing, and transmitting images of food items were usable for older adults with diabetes. The survey showed that participants found FRADA easy to use and would consider using FRADA daily. 
The analysis of the qualitative data from interviews revealed multiple categories, such as the usability of FRADA, potential benefits of using FRADA, potential features to be added to FRADA, and concerns of older adults with diabetes regarding interactions with FRADA. Conclusions This study demonstrates, in a lab-based setting, not only the usability of FRADA by older adults with diabetes but also potential opportunities for using FRADA in real-world settings. The findings suggest implications for creating a smartphone app for image-assisted dietary assessment. Future work remains to evaluate the feasibility and validity of FRADA with multiple stakeholders, including older adults with diabetes and dietitians.


2021 ◽  
Vol 11 (1) ◽  
pp. 11
Author(s):  
Cüneyt Taşkın ◽  
Umut Canlı

School climate, which is the sum of behaviors in a school, is also defined as the character of the school. A school's climate has a significant impact on the quality of education and on student success or failure. From this point of view, this study aims to examine school climate from the perspectives of physical education and sports teacher candidates. To this end, the "School Climate Scale for University Students", a three-sub-factor instrument developed by Ali R. Terzi, was administered to 303 physical education and sports teaching department students. The data obtained were first subjected to a structural analysis and then to reliability and validity testing, and the validity of the scale was confirmed. According to the type of variable, independent-groups t-tests, one-way analysis of variance tests, post hoc tests, or effect size (eta-squared) calculations were applied. While the answers given by the teacher candidates did not differ by gender, a significant difference was found according to the grade in which they were studying (in favor of first- and fourth-year students).
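The comparison battery named above (independent-groups t-test, one-way ANOVA, eta-squared effect size) can be sketched as follows. Scores are invented; eta-squared is computed by hand as the between-group share of total variance, since SciPy does not provide it directly.

```python
# Group comparisons plus a hand-computed eta-squared effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical school-climate scores by grade (years 1-4)
grades = [rng.normal(mu, 8, 75) for mu in (70, 66, 65, 71)]

# Gender comparison: independent-groups t-test on two illustrative groups
t, p_t = stats.ttest_ind(rng.normal(68, 8, 150), rng.normal(67, 8, 153))

# One-way ANOVA across the four grades
f, p_f = stats.f_oneway(*grades)

# Eta-squared = SS_between / SS_total
all_scores = np.concatenate(grades)
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in grades)
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total
print(f"t-test p={p_t:.3f}, ANOVA p={p_f:.3f}, eta^2={eta_sq:.3f}")
```

Eta-squared lies in [0, 1] and reads as the proportion of score variance explained by group membership.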


2019 ◽  
Vol 33 (05) ◽  
pp. 481-485 ◽  
Author(s):  
Anthony V. Christiano ◽  
Christian A. Pean ◽  
David N. Kugelman ◽  
Sanjit R. Konda ◽  
Kenneth A. Egol

Abstract The purpose of this study was to determine when functional outcome no longer improves following tibial plateau fracture. A patient series of operatively treated tibial plateau fractures was reviewed. Patients were evaluated using the short musculoskeletal function assessment (SMFA), range of motion (ROM) assessment, and pain levels on a visual analog scale (VAS) at 3, 6, and 12 months postoperatively. Fractures were classified by the Schatzker classification using preoperative imaging. The case series was divided into two groups based on fracture pattern. Friedman tests were conducted to determine whether there were differences in SMFA, ROM, or VAS throughout the postoperative course. A total of 117 patients with tibial plateau fractures treated operatively, with complete follow-up and without complication, were identified. Seventy-seven patients (65.8%) sustained lateral tibial plateau fractures (Schatzker I–III). The Friedman test demonstrated significant differences in SMFA (p < 0.0005) and ROM (p < 0.0005) across the three time points. Post hoc analysis demonstrated a significant difference in SMFA (p < 0.0005) and ROM (p = 0.003) between 3 and 6 months postoperatively but no significant difference in either metric between 6 and 12 months postoperatively. The Friedman test demonstrated no significant difference in VAS postoperatively (p = 0.210). Forty patients (34.2%) sustained medial or bicondylar tibial plateau fractures (Schatzker IV–VI). The Friedman test demonstrated significant differences in SMFA (p < 0.0005) and ROM (p < 0.0005) across the three time points. Post hoc analysis demonstrated a strong trend toward significance in SMFA between 3 and 6 months postoperatively (p = 0.088) and a significant difference between 6 and 12 months postoperatively (p = 0.013). ROM was significantly different between 3 and 6 months postoperatively (p = 0.010), but no difference was found between 6 and 12 months postoperatively (p = 0.929). The Friedman test demonstrated no significant difference in VAS postoperatively (p = 0.941). In this cohort, no significant difference in function, ROM, or pain level exists between 6 and 12 months after treatment of lateral tibial plateau fractures. However, there are significant improvements in function for at least 1 year following medial or bicondylar tibial plateau fractures.
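A Friedman test with pairwise follow-ups, as described above, can be sketched like this. The abstract does not name its post hoc procedure; Wilcoxon signed-rank tests are one plausible choice and are used here as an assumption. SMFA scores are simulated.

```python
# Friedman omnibus test across three repeated visits, then pairwise post hocs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 77
# Hypothetical SMFA scores (lower = better function) at 3, 6, and 12 months
m3 = rng.normal(35, 10, n)
m6 = m3 - rng.normal(8, 4, n)     # improvement between 3 and 6 months
m12 = m6 - rng.normal(0.5, 3, n)  # plateau between 6 and 12 months

chi2, p = stats.friedmanchisquare(m3, m6, m12)
print(f"Friedman: chi2={chi2:.2f}, p={p:.4f}")

if p < 0.05:  # post hoc comparisons only after a significant omnibus test
    for a, b, label in [(m3, m6, "3 vs 6"), (m6, m12, "6 vs 12")]:
        w, p_pair = stats.wilcoxon(a, b)
        print(f"{label} months: W={w:.0f}, p={p_pair:.4f}")
```

The Friedman test is the nonparametric analogue of repeated-measures ANOVA, appropriate when the same patients are scored at every visit.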


BMJ Open ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. e042045
Author(s):  
Chandini Raina MacIntyre ◽  
Tham Chi Dung ◽  
Abrar Ahmad Chughtai ◽  
Holly Seale ◽  
Bayzidur Rahman

Background: In a previous randomised controlled trial (RCT) in hospital healthcare workers (HCWs), cloth masks resulted in a higher risk of respiratory infections compared with medical masks. This was the only published RCT of cloth masks at the time of the COVID-19 pandemic. Objective: To do a post hoc analysis of unpublished data on mask washing and mask contamination from the original RCT to further understand the poor performance of the two-layered cotton cloth mask used by HCWs in that RCT. Setting: 14 secondary-level/tertiary-level hospitals in Hanoi, Vietnam. Participants: A subgroup of 607 HCWs aged ≥18 years working full time in selected high-risk wards, who used a two-layered cloth mask and were part of a randomised controlled clinical trial comparing medical masks and cloth masks. Intervention: Washing method for cloth masks (self-washing or hospital laundry). A substudy of contamination of a sample of 15 cloth and medical masks was also conducted. Outcome measure: Infection rate over 4 weeks of follow-up and viral contamination of masks tested by multiplex PCR. Results: Viral contamination with rhinovirus was identified on both used medical and cloth masks. Most HCWs self-washed their masks by hand (77% of daily washing). The risk of infection was more than double among HCWs self-washing their masks compared with those whose masks were washed in the hospital laundry (HR 2.04 (95% CI 1.03 to 4.00); p=0.04). There was no significant difference in infection between HCWs who wore cloth masks washed in the hospital laundry and those who wore medical masks (p=0.5). Conclusions: Using the self-reported method of washing, we showed double the risk of infection with seasonal respiratory viruses if masks were self-washed by hand by HCWs. The majority of HCWs in the study reported hand-washing their masks themselves. This could explain the poor performance of two-layered cloth masks, if the self-washing was inadequate. Cloth masks washed in the hospital laundry were as protective as medical masks. Both cloth and medical masks were contaminated, but only cloth masks were reused in the study, reiterating the importance of daily washing of reusable cloth masks using a proper method. A well-washed cloth mask can be as protective as a medical mask. Trial registration number: ACTRN12610000887077.
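The abstract reports a hazard ratio, which requires a time-to-event (Cox) model. As a simpler illustration of the same comparison, a crude relative risk with a log-method 95% CI can be computed from group counts. All counts below are invented, not the trial's data.

```python
# Crude relative risk (NOT the Cox hazard ratio the paper reports) with a
# 95% CI computed on the log scale. Counts are hypothetical.
import math

inf_self, n_self = 40, 400        # infections among self-washing HCWs
inf_laundry, n_laundry = 10, 207  # infections among hospital-laundry HCWs

risk_self = inf_self / n_self
risk_laundry = inf_laundry / n_laundry
rr = risk_self / risk_laundry

# Standard error of log(RR), then exponentiate the CI bounds
se_log_rr = math.sqrt(1/inf_self - 1/n_self + 1/inf_laundry - 1/n_laundry)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR={rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A CI excluding 1 indicates a significant excess risk in the self-washing group; the Cox model additionally accounts for follow-up time and censoring, which this crude ratio ignores.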


1998 ◽  
Vol 87 (2) ◽  
pp. 575-584 ◽  
Author(s):  
Lars McNaughton ◽  
Phil Hall ◽  
Dean Cooley

The purpose of this study was to identify the most accurate predictor of VO2max from a variety of running tests. 32 young adult male undergraduates of (mean ± SE) age 20.14 ± 0.34 yr., height 179.4 ± 1.8 cm, weight 73.7 ± 2.8 kg, and VO2max 57.89 ± 1.1 ml · kg−1 · min.−1 were randomly tested on four different predictive VO2max running tests; their actual VO2max was assessed with a continuous, progressive treadmill protocol and obtained via gas analysis. The four tests consisted of a treadmill jogging test, 1.5 mile run, Cooper's 12-min. run, and the 20-m progressive shuttle-run test. An analysis of variance applied to the means indicated significance. Post hoc analysis between the means with correction by Scheffé showed a significant difference between the predictive submaximal treadmill jogging test and the 12-min. run but no other differences. The strength of the relationship between the predictive tests and VO2max varied, with the 12-min. run having the highest correlation of .87, followed by the 1.5 mile run .87, 20-m progressive shuttle run .82, and the treadmill jogging test .50. The 12-min. run had the highest correlation of all tests with VO2max in young men with active to trained levels of fitness. The 1.5 mile and 20-m shuttle runs also provided accurate predictions of VO2max and so should be used for an accurate prediction of young men's VO2max.
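The validity statistic reported above is a Pearson correlation between each predictive test and the gas-analysis VO2max criterion. A minimal sketch on simulated values:

```python
# Pearson correlation between a predictive test and measured VO2max.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical criterion values from the treadmill gas-analysis protocol
measured = rng.normal(57.9, 6.0, 32)
# A predictor tracking the criterion with some measurement noise
predicted_12min = measured + rng.normal(0, 3.0, 32)

r, p = stats.pearsonr(predicted_12min, measured)
print(f"12-min run estimate vs measured VO2max: r={r:.2f}, p={p:.4f}")
```

Comparing such r values across the four tests is exactly how the study ranks the 12-min. run, 1.5 mile run, shuttle run, and treadmill jogging test.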


Blood ◽  
2009 ◽  
Vol 114 (22) ◽  
pp. 1868-1868 ◽  
Author(s):  
Michele Cavo ◽  
Sara Bringhen ◽  
Nicoletta Testoni ◽  
Paola Omedè ◽  
Giulia Marzocchi ◽  
...  

Abstract 1868 Poster Board I-893 Introduction Bortezomib was initially reported to overcome the poor prognosis related to the presence of del(13q) in patients with advanced refractory/relapsed multiple myeloma (MM). However, more recent evaluations of genomic aberrations in MM demonstrated that only t(4;14) and del(17p) retained prognostic value for both EFS and OS, thus identifying a subgroup of patients at high risk of progression or death. The combination of bortezomib with melphalan and prednisone, currently licensed as first-line therapy for MM patients who are not eligible for autologous stem-cell transplantation (ASCT), showed comparable activity in terms of time to progression and OS among patients with or without high-risk cytogenetic profiles. However, the number of high-risk patients analyzed was very limited, due to the low frequency of these genomic abnormalities. To more carefully assess the role of bortezomib in patients with high-risk cytogenetics [e.g. carrying t(4;14) and/or del(17p)], we performed a post-hoc analysis of two phase 3 studies of first-line bortezomib-based regimens for the treatment of a large series of MM patients. Both studies were conducted by the Italian Myeloma Network GIMEMA. Patients and methods The activity of three different bortezomib-based regimens in terms of achievement of best high-quality response (immunofixation-negative CR) and PFS was analyzed. Regimens evaluated were bortezomib-thalidomide-dexamethasone (VTD), bortezomib-melphalan-prednisone (VMP) and bortezomib-melphalan-prednisone-thalidomide (VMPT). VTD was followed by ASCT.
Treatment details are as follows: VTD (Bortezomib, 1.3 mg/m2 twice-weekly, every 21/d cycle; Thalidomide, 200 mg/d; Dexamethasone, 320 mg/cycle); VMP (Bortezomib 1.3 mg/m2 on d 1, 8, 15 and 22, every 35/d cycle; Melphalan, 9 mg/m2 on d 1 through 4, every cycle; Prednisone, 60 mg/m2 on d 1–4 of each cycle); VMPT (VMP, as previously described; Thalidomide, 50 mg/d). A total of 566 patients for whom results of interphase FISH analysis at diagnosis were available for the presence or absence of del(13q) and/or t(4;14) and/or del(17p), were included in the present study. Three cytogenetic subgroups of patients were identified, including those without genomic abnormalities (group 1; n=257), those with del(13q) alone (group 2; n=162) and those who carried t(4;14) and/or del(17p) with or without del(13q) (group 3; n=147). For the purpose of the present analysis, clinical outcomes (e.g. CR rate and PFS) of patients treated with the 3 bortezomib-based regimens were compared according to the presence or absence of different genomic aberrations (e.g. group 1 vs 3 and group 2 vs 3). Results Overall, the frequency of patients belonging to group 1 (no abnormalities), group 2 [del(13q) alone] and group 3 [t(4;14)±del(17p)] was 45%, 29% and 26%, respectively. Comparable rates of genomic aberrations were detected in patients treated with the 3 bortezomib-based regimens [no genetic abnormalities: 46% in VTD vs 48% in VMP vs 42% in VMPT; del(13q) alone: 30% in VTD vs 28% in VMP vs 28% in VMPT; t(4;14)±del(17p): 24% in VTD vs 24% in VMP vs 30% in VMPT]. No statistically significant difference in terms of CR rate was detected by comparing patients in group 3 with those in group 1 (38% vs 31.5%, respectively; P=0.1) and in group 2 (48%, P=0.07). The 2-year projected PFS was 63% for patients with high-risk cytogenetics vs 71% for those with del(13q) alone (P=0.1) vs 75% for patients without cytogenetic abnormalities (P=0.01). 
The finding that in the high-risk cytogenetic subgroup the VMP regimen, comprising once-weekly standard-dose bortezomib, yielded the lowest CR rate and shortest PFS may explain, at least in part, the longer PFS of the subgroup without cytogenetic abnormalities. Indeed, after exclusion of the VMP regimen from the analysis, no statistically significant difference in PFS was seen between VTD- and VMPT-treated patients according to the presence of high-risk cytogenetics or the absence of genomic abnormalities (P=0.09). Conclusions These results, based on a post-hoc analysis of patients with different ages and treatment exposures, should be interpreted cautiously, although they are consistent with previous reports on the activity of bortezomib in MM with high-risk cytogenetic abnormalities. Further analyses of large series of homogeneously treated patients are needed before firm conclusions can be drawn about the ability of bortezomib-based regimens to overcome the adverse prognosis related to t(4;14) and/or del(17p). Disclosures: Cavo: Ortho Biotech, Janssen-Cilag: Honoraria, Research Funding, Speakers Bureau; Millennium Pharmaceuticals: Honoraria; Novartis: Honoraria; Celgene: Honoraria. Boccadoro: Ortho Biotech, Janssen-Cilag: Honoraria, Speakers Bureau. Palumbo: Ortho Biotech, Janssen-Cilag: Honoraria; Celgene: Honoraria, Speakers Bureau.
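A CR-rate comparison between cytogenetic groups like the one above can be run as a test on a 2×2 table. The abstract does not state which test was used; Fisher's exact test is shown here as one standard choice, with counts invented to approximate the reported percentages (38% of 147 vs 31.5% of 257).

```python
# Fisher's exact test on complete-response counts for two cytogenetic groups.
from scipy import stats

# Rows: [CR, no CR]; counts are hypothetical reconstructions
table = [[56, 91],    # group 3: t(4;14) and/or del(17p), ~38% CR
         [81, 176]]   # group 1: no genomic abnormalities, ~31.5% CR

odds_ratio, p = stats.fisher_exact(table)
print(f"OR={odds_ratio:.2f}, p={p:.3f}")
```

Note that the PFS comparison in the abstract is a time-to-event endpoint and would instead require a log-rank test or Cox model, not a contingency-table test.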


1986 ◽  
Vol 3 (1) ◽  
pp. 14-21 ◽  
Author(s):  
Marilyn A. Cooper ◽  
Claudine Sherrill ◽  
David Marshall

Attitudes toward physical activity were examined in relation to sports classification (nonambulatory vs. ambulatory) and gender for elite cerebral palsied athletes and were compared to attitudes of elite Canadian able-bodied athletes (Alderman, 1970). Subjects were 165 CP adult athletes who competed in the 1983 National CP Games, Ft. Worth, Texas. Data were collected by interview on the Simon and Smoll Attitude Toward Physical Activity Scale (SATPA). SATPA answers were treated with MANOVA and ANOVA, and the Scheffé test was used for post hoc analysis. No significant difference was found among class, gender, and class-by-gender combinations in attitudes toward physical activity. Adult CP athletes have positive attitudes toward the total concept of physical activity, but are significantly less favorably disposed to physical activity as a thrill and as long and hard training than as social experience, health and fitness, beauty, and tension release.
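The Scheffé post hoc named above is not built into SciPy, but its pairwise form is easy to compute by hand: each contrast's F statistic is compared against (k−1) times the ANOVA critical F. A sketch on simulated attitude scores, not the study's data:

```python
# Manual Scheffé pairwise comparisons after one-way ANOVA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical SATPA sub-scale means for k = 3 illustrative groups
groups = [rng.normal(mu, 1.0, 40) for mu in (5.8, 5.7, 4.9)]
k = len(groups)
n_total = sum(len(g) for g in groups)
df_within = n_total - k

# Pooled within-group error variance (mean square within)
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_within
f_crit = stats.f.ppf(0.95, k - 1, df_within)
scheffe_crit = (k - 1) * f_crit  # Scheffé criterion at alpha = 0.05

for i in range(k):
    for j in range(i + 1, k):
        a, b = groups[i], groups[j]
        diff = a.mean() - b.mean()
        f_contrast = diff ** 2 / (ms_within * (1 / len(a) + 1 / len(b)))
        print(f"group {i} vs {j}: F={f_contrast:.2f}, "
              f"significant={f_contrast > scheffe_crit}")
```

Scheffé's criterion is conservative because it protects against all possible contrasts, not just the pairwise ones, which suits exploratory post hoc use after MANOVA/ANOVA.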


2014 ◽  
Vol 25 (09) ◽  
pp. 893-903 ◽  
Author(s):  
Erik P. Rauterkus ◽  
Catherine V. Palmer

Background: The hearing aid effect is the term used to describe the assignment of negative attributes to individuals using hearing aids. The effect was first empirically identified in 1977 when it was reported that adults rating young children with and without hearing aids assigned negative attributes to the children depicted with hearing aids. Investigations in the 1980s and 1990s reported mixed results related to the extent of the hearing aid effect but continued to identify, on average, some negative attributes assigned to individuals wearing hearing aids. Purpose: The specific aim of this research was to investigate whether the hearing aid effect has diminished in the past several decades by replicating the methods of previous studies for testing the hearing aid effect while using updated devices. Research Design: Five device configurations were rated across eight attributes. Results for each attribute were considered separately. Study Sample: A total of 24 adults judged pictures of young men wearing various ear level technologies across 8 attributes on a 7-point Likert scale. Five young men between ages 15 and 17 yr were photographed wearing each of five device configurations including (1) a standard-sized behind-the-ear (BTE) hearing aid coupled to an earmold with #13 tubing, (2) a mini-BTE hearing aid with a slim tube open-fit configuration, (3) a completely-in-the-canal hearing aid that could not be seen because of its location in the ear canal, (4) an earbud, and (5) a Bluetooth receiver. Data Collection and Analysis: The 24 raters saw pictures of each of the 5 young men with each wearing one of the 5 devices so that devices and young men were never judged twice by the same observer. All judgments of each device, regardless of the young man modeling the device, were combined in the data analysis. The effect of device types on judgments was tested using a one-way between-participant analysis of variance. 
Results: There was a significant difference in the judgments of age and trustworthiness among the five devices. However, our post hoc analysis revealed only two significant effects. People wearing a completely-in-the-canal aid (nothing visible in the ear) were rated significantly older than people wearing an earbud, and people wearing the standard-size BTE with earmold were rated significantly more trustworthy than people who wore the Bluetooth device. Conclusions: It was hypothesized that the hearing aid effect would be diminished in 2013 compared with data reported in the past. This proved to be the case, as no hearing aid condition was rated more negatively than any of the non–hearing aid device conditions. In fact, models wearing the standard-size BTE with earmold were rated as more trustworthy than models wearing the Bluetooth device. The standard-sized BTE with earmold condition is the configuration that can be directly compared with previous research because similar devices were used in those studies. These results indicate that the hearing aid effect has diminished, if not completely disappeared, in the 21st century.

