Cooking Education Improves Cooking Confidence and Dietary Habits in Veterans

2019 ◽  
Vol 45 (4) ◽  
pp. 442-449 ◽  
Author(s):  
Ashley S. Dexter ◽  
Janet F. Pope ◽  
Dawn Erickson ◽  
Catherine Fontenot ◽  
Elizabeth Ollendike ◽  
...  

Purpose The purpose of the study was to evaluate the effect of a 12-week cooking education class on cooking confidence, dietary habits, weight status, and laboratory data among veterans with prediabetes and diabetes. Methods The sample for this study included 75 veterans within the Overton Brooks Veteran Affairs Medical Center who completed the 12-week class in an in-person group setting in Shreveport, Louisiana, or via Clinical Video Telehealth (CVT) in Longview, Texas. Veterans were referred to the Healthy Teaching Kitchen by their primary care provider or primary care dietitian. Enrollment in the class was on a volunteer basis. The cooking and nutrition education classes included topics such as carbohydrate counting, safety and sanitation, meal planning, and creating budget-friendly recipes. Participants completed 2 questionnaires assessing healthy dietary habits and confidence related to cooking. Changes in body weight, lipid panel, and hemoglobin A1C were assessed. Differences between class settings were tested via independent-samples t tests. Paired-samples t tests were used to compare changes in mean laboratory results, weight, and questionnaire responses. Results Subjects lost a mean of 2.91 ± 5.8 lb (P < .001). There was no significant difference in percent change in laboratory data and weight between subjects participating via CVT and subjects in the live class. Overall, there was significant improvement in the confidence questionnaire ratings and Healthy Habits Questionnaire responses. Conclusions Cooking and nutrition education can increase cooking confidence and dietary quality. These results support the need for further research on the long-term effects of nutrition and cooking education and on the benefits of using CVT software to provide education to remote facilities.
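The pre/post comparisons above rely on paired-samples t tests, which operate on per-subject differences rather than group means. A minimal, stdlib-only Python sketch with made-up weights (not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for a paired-samples t test (df = n - 1).

    Pairing uses the per-subject differences, which is what makes the
    test suitable for pre/post designs like the 12-week class.
    """
    diffs = [a - b for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical pre/post body weights (lb) for four participants:
t_stat = paired_t([200, 190, 210, 180], [197, 188, 206, 178])
```

The resulting t statistic is compared against the t distribution with n - 1 degrees of freedom to obtain the P value.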

Author(s):  
Nisreen M. Abdulsalam ◽  
Najla A. Khateeb ◽  
Sarah S. Aljerbi ◽  
Waad M. Alqumayzi ◽  
Shaima S. Balubaid ◽  
...  

The World Health Organization declared coronavirus disease 2019 (COVID-19) a pandemic in March 2020. Global efforts have been made to prevent the disease from spreading through political decisions and personal behaviors, all of which rely on public awareness. The aim of our study was to examine the effect of dietary habits on weight and physical activity (PA) during the COVID-19 stay-at-home order in Jeddah, Saudi Arabia. An online questionnaire was distributed using social media (Facebook, Twitter, Instagram, and WhatsApp) and email communication. A total of 472 adults (age range, 18–59 years) participated in the study; over half of the study population (68.0%) were female, 55.5% were between 19 and 29 years old, 15.0% were between 30 and 39 years old, and 11.2% were older than 50 years. Our results indicated that overall body weight increased slightly in the 50+ age group (47.2%, p > 0.05) but increased markedly in the 30–39-year-old group (32.4%, p > 0.05) compared with the period before the pandemic lockdown. A significant difference (p < 0.05) was found for all assessments: weight status, physical activity patterns, hours of screen time, homemade meals, and changes in dietary habits before versus during the full COVID-19 curfew period. This study demonstrated that changes in eating habits were commonly reported among participants during the full COVID-19 curfew period and that these changes, together with decreased physical activity, led to weight gain.


Author(s):  
Darisi S ◽  
◽  
Ahmed T ◽  
George J ◽  
Ramaiah B ◽  
...  

Aim: To assess the efficacy of vitamin K antagonist (VKA) therapy in maintaining a stable INR in a tertiary care hospital. Methodology: All patients who had been on VKA therapy for more than 6 months before the initiation of the study were included. Data collected included demographics, personal history, medical history, medication history, dietary habits, laboratory data (INR), and other relevant information. Laboratory results were evaluated using the Rosendaal method to obtain the Time in Therapeutic Range (TTR), which was used to assess medication use and to examine further correlations. Results and Discussion: The study showed a mean TTR of 25.638%; the mean time above and below the therapeutic range was 19.23% (±17.14) and 55.11% (±29.64), respectively, indicating that patients in the sample population are at higher risk of developing a new clot during VKA therapy. Chronic conditions such as diabetes mellitus, as well as the use of NSAIDs and PPIs, also showed a statistically significant effect on patients' TTR. Conclusion: Although patients were therapeutically anticoagulated, the available data suggest that many patients in the study population remain at high risk of anticoagulant complications and of new clot formation during treatment. There are few reports of TTR measurement in the Indian population. The use of vitamin K antagonists comes with many limitations, and newer oral anticoagulants (NOACs), which have been shown to provide better TTR control, may be considered instead.
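The Rosendaal method cited above assigns every day between two INR measurements a linearly interpolated INR, then reports the percentage of days in, above, and below range. A minimal sketch, assuming day-resolution visit dates and a 2.0-3.0 target range (the abstract does not state the range used):

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Return (% in range, % above, % below) over the follow-up period.

    days: visit times as day offsets (ascending); inrs: the INR measured
    at each visit. Between visits the INR is assumed to change linearly.
    """
    in_range = above = below = total = 0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        for step in range(span):
            inr = i0 + (i1 - i0) * step / span  # linear interpolation
            total += 1
            if inr < low:
                below += 1
            elif inr > high:
                above += 1
            else:
                in_range += 1

    def pct(count):
        return 100.0 * count / total

    return pct(in_range), pct(above), pct(below)
```

For example, visits on days 0, 10, and 20 with INRs of 1.5, 2.5, and 3.5 yield a TTR of 55%, with 20% of days above and 25% below range.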


2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S62-S62
Author(s):  
Jonathan A Kendall ◽  
Jordan Colson ◽  
Lyla Saeed ◽  
Masako Mizusawa ◽  
Takeru Yamamoto

Abstract Background 1,3-β-D-glucan (BDG) is a cell wall component of fungi such as Aspergillus spp., Candida spp., and Pneumocystis jirovecii. The BDG assay is used as a screening test to aid early diagnosis of invasive fungal infections (IFI), which are associated with significant morbidity and mortality in immunocompromised patients. Diagnostic performance varies with the IFI risk of the study population, so it is important to select patients with risk factors for IFI in order to optimize utilization of the BDG test. Methods An intervention to improve BDG test utilization was initiated at Truman Medical Center on November 28, 2018. The BDG test order was replaced by a BDG test request sent to the inbox of an on-call pathology team. The team reviewed the patient information and called the ordering physician to discuss the case based on the approval algorithm chart. The criteria for BDG test approval were 1) immunocompromised or ICU patient, and 2) on empiric antifungal therapy, or inability to perform specific diagnostic tests such as bronchoscopy. If approved, a BDG test order was immediately processed. Retrospective chart review was conducted for 1 year pre- and post-intervention to obtain demographic, clinical, and laboratory data for 4 patient groups. Group 1 included patients who had BDG tests during the pre-intervention period. Group 2 comprised all patients who had BDG test requests during the post-intervention period. Groups 2a and 2b were the post-intervention patients with approved and rejected BDG test requests, respectively. Results The number of BDG tests performed per year decreased from 156 pre-intervention to 24 post-intervention. Of 65 test requests, 41 were rejected, yielding $7,380 in direct cost savings. There was no significant difference in age or in the proportion of immunocompromised and ICU patients between Groups 1 and 2. The test positivity rate was significantly higher in Group 2a than in Group 1 (45.8% vs. 25.3%, p=0.038). There was no delay in IFI diagnosis and no IFI-related mortality among patients whose BDG test requests were rejected. Conclusion We successfully and safely implemented a diagnostic stewardship intervention for BDG testing and improved test utilization. Disclosures All Authors: No reported disclosures
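The approval criteria above reduce to a two-part boolean rule. A hypothetical sketch (the parameter names are mine, not the center's algorithm chart):

```python
def approve_bdg_request(immunocompromised: bool, icu: bool,
                        on_empiric_antifungal: bool,
                        specific_tests_feasible: bool) -> bool:
    """Approval rule as stated in the abstract: the patient must be
    immunocompromised or in the ICU, AND either already on empiric
    antifungal therapy or unable to undergo specific diagnostic tests
    (e.g. bronchoscopy)."""
    return (immunocompromised or icu) and (
        on_empiric_antifungal or not specific_tests_feasible)

# An ICU patient on empiric antifungals qualifies:
ok = approve_bdg_request(False, True, True, True)
# A non-immunocompromised ward patient does not:
rejected = approve_bdg_request(False, False, True, True)
```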


2018 ◽  
Vol 5 (suppl_1) ◽  
pp. S403-S403
Author(s):  
J Patrik Hornak

Abstract Background HIV pre-exposure prophylaxis (PrEP) uptake by primary care physicians remains low, and increased awareness among these physicians has been positively associated with adoption. Prior studies have also revealed deficits in knowledge and comfort providing PrEP among internal medicine (IM) trainees. This is among the first reports assessing PrEP uptake by IM residents, and it appears to be the first examining pre- and post-instruction prescribing attitudes following a single lecture on the topic. Methods An anonymous online survey was distributed to all IM residents at our institution to measure baseline PrEP awareness and prescribing patterns. A comprehensive PrEP lecture was formulated with assistance from infectious diseases (ID) faculty, with particular focus on addressing concerns about cost, safety, risk behavior compensation, and drug resistance. The lecture was made available electronically to those unable to attend the live session. PrEP knowledge and prescribing attitudes were measured and compared pre- and post-lecture. Fisher's exact test was used for descriptive statistics. Results Of 97 initial surveys distributed, 41 were completed. A majority of respondents were aware of PrEP (68%). A modest number had either prescribed PrEP or referred a prospective patient to an ID specialist in the prior year (15%). The majority preferred to learn about PrEP through a dedicated didactic session (76%). Compared with baseline, following the lecture, residents were better able to identify both the number of daily pills required (100% vs. 49%, P = 0.007) and the proper medication regimen (100% vs. 49%, P = 0.007); there was no significant difference in self-reported comfort with providing PrEP (89% vs. 65%, P = 0.25). In the post-lecture survey, nearly half (43%) reported a preference to refer a PrEP candidate to an ID specialist or PrEP clinic.
Conclusion These findings suggest value in providing PrEP education to IM trainees, but indicate that a single lecture may not be effective for ultimately improving its adoption by this important group of physicians. Determining the optimal method for incorporating PrEP into residency curricula deserves further study. Despite efforts to expand PrEP into the realm of primary care, many of these physicians may continue to defer management of these patients to ID/HIV clinicians. Disclosures All authors: No reported disclosures.


2002 ◽  
Vol 36 (5) ◽  
pp. 892-904 ◽  
Author(s):  
Margaret A Cording ◽  
Emily B Engelbrecht-Zadvorny ◽  
B Jill Pettit ◽  
John H Eastham ◽  
Rheta Sandoval

OBJECTIVE: To describe the development of a pharmacist-managed lipid clinic within a primary care medical clinic and review its results after approximately 12 months of operation. METHODS: A pharmacist-managed lipid clinic was developed at Naval Medical Center San Diego. Administrative background, treatment algorithm development, operational issues, clinical activities, and barriers to the clinic are discussed. For intermediate outcomes, data from patients who had at least 1 intervention by the pharmacist and 1 follow-up lipid panel were analyzed for medication use, changes in lipid parameters, and percent reaching the low-density-lipoprotein (LDL) target goal. Modified National Cholesterol Education Program — Adult Treatment Panel II guidelines were used to determine the LDL goal. RESULTS: Following approximately 12 months of operation, the clinic received 204 referrals and consisted of 146 active patients. A brief study was conducted to assess clinical outcomes. Of 115 patients who were seen in the clinic and met inclusion criteria, 57% were receiving treatment with a hydroxymethylglutaryl coenzyme A reductase inhibitor (statin) and 17% were receiving fibrates; 17% of the patients were not receiving lipid-lowering medications. Relative to baseline, LDL cholesterol concentrations decreased 20%, high-density-lipoprotein cholesterol increased 11%, and triglycerides decreased 19%. Overall, LDL goals were reached in 77% of the patients. LDL goals were attained by 63%, 79%, and 93% of patients with targets of <100, <130, and <160 mg/dL, respectively. Results are compared with other studies regarding lipid goal attainment. CONCLUSIONS: A pharmacist-managed lipid clinic can be developed and integrated into a primary care medical clinic. Pharmacists can effectively manage lipid-lowering therapy, helping to achieve LDL goals.


Author(s):  
Michael C. Andrews ◽  
Catherine Itsiopoulos

Athletes require sufficient nutrition knowledge and skills to enable appropriate selection and consumption of food and fluids to meet their health, body composition, and performance needs. This article reports the nutrition knowledge and dietary habits of male football (soccer) players in Australia. Players aged 18 years and older were recruited from 1 A-League club (professional) and 4 National Premier League clubs (semiprofessional). No significant difference between professional (n = 29) and semiprofessional (n = 44) players was noted in general nutrition knowledge (GNK; 54.1% ± 13.4% vs. 56.8% ± 11.7%; M ± SD), t(71) = -0.91, p = .37, or sports nutrition knowledge (SNK; 56.9% ± 15.5% vs. 61.3% ± 15.9%), t(71) = -1.16, p = .25. In general, players lacked knowledge of food sources and types of fat. Although nutrition knowledge varied widely among players (24.6–82.8% correct responses), those who had recently studied nutrition answered significantly more items correctly than those who reported no recent formal nutrition education (62.6% ± 11.9% vs. 54.0% ± 11.4%), t(67) = 2.88, p = .005. Analysis of 3-day estimated food diaries revealed that both professionals (n = 10) and semiprofessionals (n = 31) consumed on average less carbohydrate (3.5 ± 0.8 and 3.9 ± 1.8 g/kg per day, respectively) than football-specific recommendations (FIFA Medical Assessment and Research Centre [F-MARC]: 5–10 g/kg). There was a moderate positive correlation between SNK and carbohydrate intake (n = 41, ρ = 0.32, p = .04), indicating that players with greater SNK had higher carbohydrate intakes. On the basis of these findings, male football players in Australia would benefit from nutrition education targeting carbohydrate and fat in an attempt to improve nutrition knowledge and dietary practices.


2011 ◽  
Vol 14 (7) ◽  
pp. 1303-1311 ◽  
Author(s):  
Carla K Miller ◽  
Amy Headings ◽  
Mark Peyrot ◽  
Haikady Nagaraja

Abstract Objective A lower glycaemic index (GI) diet is associated with a reduction in glycosylated Hb (HbA1c) in people with diabetes. Yet, little research has been conducted to determine the effects of specific goals regarding consumption of low-GI (LGI) foods on diabetes outcomes. The present study evaluated a behavioural intervention on dietary intake, weight status and HbA1c, which included a goal to consume either six or eight servings of LGI foods daily. Design A parallel two-group design was used. Following the 5-week intervention, participants were randomly assigned to the group of six (n 15) or eight (n 20) servings of LGI foods daily and followed up for 8 weeks. Dietary intake was assessed using the mean of 4 d food records. Setting A metropolitan community in the USA. Subjects Individuals aged 40–65 years with type 2 diabetes of ≥1 year and HbA1c ≥ 7·0 % were eligible. Results There was no significant difference between goal difficulty groups with regard to GI servings at the end of the study. However, mean consumption of LGI foods increased by 2·05 (se 0·47) and 1·65 (se 0·40) servings per 4184 kJ in the six (P < 0·001) and eight (P < 0·001) LGI serving groups, respectively. For all participants combined, there were significant decreases in mean HbA1c (−0·58 (se 0·21) %; P = 0·01), weight (−2·30 (se 0·78) kg; P = 0·01), BMI (−0·80 (se 0·29) kg/m2; P = 0·01) and waist circumference (−2·36 (se 0·81) cm; P = 0·01). Conclusions An intervention including a specific goal to consume six to eight servings of LGI foods daily can improve diabetes outcomes. Clinicians should help patients set specific targets for dietary change and identify ways of achieving those goals.


Author(s):  
Fatma Elsayed ◽  
Aram Alhammadi ◽  
Alanood Alahmad ◽  
Zahra Babiker ◽  
Abdelhamid Kerkadi

The prevalence of obesity in Qatar has increased with the transition from healthy to unhealthy dietary habits. Behavioral factors associated with obesity include long-term imbalanced energy intake, high screen time, skipping breakfast, and physical inactivity. Changes in body composition and percent body fat (PBF) increase the risk of non-communicable disease. This is the first study conducted in Qatar to investigate the relationship between dietary patterns and body composition among young females at Qatar University. This cross-sectional study included 766 healthy female students (Qatari and non-Qatari) aged 18-26 years, randomly selected from different colleges at Qatar University. A validated questionnaire was used to collect data about healthy and unhealthy dietary patterns. Anthropometric measurements included body weight, height, waist-to-height ratio (WHtR), waist circumference (WC), body mass index (BMI), and body composition, measured using the “Seca285”, “Seca203”, and “InbodyBiospace 720” devices. Dietary patterns were identified using factor analysis. Linear regression was used to estimate regression coefficients and confidence intervals. More than half of the participants had a normal weight (65.1%), whereas 22.8% and 12.0% were overweight and obese, respectively. Fat mass, BMI, and PBF increased slightly with age, but the differences were not significant. Factor analysis identified two dietary patterns: an unhealthy pattern and a healthy pattern. Frequent intake of vegetables and fruits was significantly more common among female students with high PBF (p=0.045 and p=0.001, respectively). Frequent intake of fast food was higher among overweight female students, but the difference was not significant (p=0.289), whereas frequent intake of sweetened beverages was significantly associated with normal weight (p = 0.009). No significant relation was found between dietary patterns, BMI, and PBF. In conclusion, body composition is not significantly associated with healthy and unhealthy eating patterns among young females.


2021 ◽  
Vol 80 (Suppl 1) ◽  
pp. 191.1-192
Author(s):  
S. Amikishiyev ◽  
M. G. Gunver ◽  
M. Bektas ◽  
S. Aghamuradov ◽  
B. Ince ◽  
...  

Background: COVID-19 runs a severe course associated with acute respiratory distress syndrome in a subset of patients, and a hyperinflammatory response developing in the second week contributes to a worse outcome. The inflammatory features are mostly compatible with the macrophage activation syndrome (MAS) observed in other viral infections, despite resulting in milder changes. Early detection and treatment of MAS may be associated with a better outcome; however, the available criteria for MAS associated with other causes have not been helpful. Objectives: To identify distinct features of MAS associated with COVID-19 using a large database enabling assessment of dynamic changes. Methods: PCR-confirmed hospitalized COVID-19 patients followed between March and September 2020 constituted the discovery set. Patients considered to have findings of MAS by experienced physicians and given anakinra or tocilizumab were classified as the MAS group, and the remaining patients as the non-MAS group. The MAS group was then re-grouped by the study group into exact-MAS and borderline-MAS cases. Clinical and laboratory data, including the Ct values of the PCR test, were obtained from the database, and dynamic changes were evaluated, especially for the first 14 days of hospitalization. A second set of 162 patients followed between September and December 2020 was used as the replication group to test the preliminary criteria. In the second set, hospitalization rules had changed: all patients required oxygen support and received dexamethasone 6 mg/day or equivalent glucocorticoids. Daily changes were calculated for the laboratory items in the MAS, borderline, and non-MAS groups to identify the days differentiating the groups, and ROC curves and lower and upper limits (10-90%) of the selected parameters were calculated to determine the cutoff values. Results: A total of 769 PCR-confirmed hospitalized patients were analysed; 77 of them were classified as MAS and 83 as borderline MAS patients. There was no statistically significant difference in the baseline viral loads of MAS patients compared to the non-MAS group according to the Ct values. Daily dynamic changes in the MAS group differed from the non-MAS group especially around the 6th day of hospitalization, and more than a twofold increase in ferritin and a 1.5-fold increase in D-dimer levels compared to baseline values help to define the MAS group. The twelve items selected for the criteria are given in Table 1 below. A total score of 45 provided 79.6% sensitivity for MAS (including borderline cases) and 81.3% specificity around days 5 and 6 in the discovery set, and a score of 60 increased the specificity to 94.9%, albeit with a decrease in sensitivity to 40.8%. The same item set provided similar sensitivity (80.3%) in the replication group, but lower specificity (47.4-66% on days 6 to 9) due to a group of control patients with findings of MAS possibly masked by glucocorticoids.
Table 1. Preliminary Criteria for Macrophage Activation Syndrome Associated with Coronavirus Disease-19
1. Fever (>37.0 °C)
2. Ferritin concentration > 550 ng/mL
3. More than 2-fold increase in ferritin concentration within 7 days of disease onset
4. Neutrophil count > 6000 cells/mm3
5. Lymphopenia < 1000 cells/mm3
6. Neutrophil/lymphocyte ratio > 6
7. D-dimer concentration > 1000 ng/mL
8. More than 50% increase in D-dimer concentration within 7 days of disease onset
9. CRP concentration > 50 mg/L
10. LDH concentration > 300 U/L
11. ALT or AST concentration > 50 U/L
12. Procalcitonin concentration < 1.2
Scoring: 1 point for each positive item assessed on Days 5-7; score = total points / 12 × 100. Possible MAS ≥45; definite MAS ≥60.
Conclusion: This study defined a set of preliminary criteria using the most relevant items of MAS according to the dynamic changes in these parameters in a group of COVID-19 patients. A score of 45 would be helpful to define a possible MAS group with reasonable sensitivity and specificity so that necessary treatments can be started as early as possible. Disclosure of Interests: None declared.
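The scoring rule at the foot of Table 1 is simple arithmetic: one point per positive item, scaled to 100. A minimal sketch using the abstract's cutoffs (45 for possible, 60 for definite MAS):

```python
def mas_score(n_positive_items: int, n_items: int = 12) -> float:
    """Total points / 12 x 100, with items assessed on days 5-7."""
    return n_positive_items / n_items * 100.0

def classify_mas(score: float) -> str:
    """Possible MAS >= 45, definite MAS >= 60 (the abstract's cutoffs)."""
    if score >= 60:
        return "definite MAS"
    if score >= 45:
        return "possible MAS"
    return "below threshold"

# Six of twelve positive items -> score 50 -> possible MAS:
label = classify_mas(mas_score(6))
```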


2020 ◽  
Vol 79 (Suppl 1) ◽  
pp. 1580.2-1580
Author(s):  
R. De Angelis ◽  
F. Salaffi

Background: Growing evidence supports the role of microvasculopathy as a primary pathogenic event in systemic sclerosis (SSc). The most commonly used imaging technique to identify microangiopathy in SSc is nailfold videocapillaroscopy (NVC), and reduced capillary density and/or capillary loss, a typical feature of “scleroderma microangiopathy” easily identified by NVC, has been associated with digital ulcers (DUs). Different approaches have been proposed to measure capillary density or capillary loss. Some of these are qualitative, others semi-quantitative, and others cover only a limited nailfold area without evaluating the overall density, which is more suitable for quantitative estimation. Objectives: To assess the association between different values of nailfold capillary density and the presence of DUs, and to identify the risk of developing DUs based on quantitative parameters. Methods: The study involved 54 selected SSc patients (47 women and 7 men; mean age 59.5 years; 50 with limited and 4 with diffuse disease). The study population came from an ongoing database that includes clinical and laboratory data of patients with definite SSc. A videocapillaroscope (VideoCap® 3.0, DS Medica, Milan, Italy) with a 200x optical probe was used. During the examination, eight fingers (fingers 2-5 of each hand) and 4 fields per finger were assessed, according to the standard literature. For each patient, a total of 32 images were collected and classified as showing a “normal”, “non-specific”, or “scleroderma” pattern (SP). Capillary density was defined as the number of capillaries/mm in the distal row, regardless of shape and morphology. Avascular areas were defined by the absence of loops within a width extending over more than 500 microns. For each patient, the SP images were further graded as showing no/slight reduction of capillary density (7-9 loops/mm) (NOR), a well-defined reduction of capillary density (4-6 loops/mm) (RED), or loss of capillaries (<4 loops/mm) plus avascular areas (AA). The overall percentages (the number of images with SP, NOR, RED, and AA, relative to the 32 collected) were then calculated to obtain the quantitative measures. All data were analysed using MedCalc® version 18.6, 64-bit (MedCalc Software, Mariakerke, Belgium). Results: A total of 1728 images were analyzed. Patients with DUs numbered 16/54 (29.6%). All patients had an SP, but only five patients showed an SP along the entire nailfold. A comparison between patients with and without DUs showed a significant difference both for the overall extent of AA (p=0.032) and, particularly, for the overall extent of RED (p<0.001). No significant difference was found for the overall extent of the SP (p=0.085). The factor significantly associated with DUs in multivariate analysis was the overall extent of RED (p=0.0286). The ROC curve was very effective at identifying the capillary feature able to distinguish patients with DUs from patients without DUs: the discriminatory power of the overall extent of RED was very good, with an AUC of 0.948 (95% CI 0.852 to 0.990). We then calculated the cutoff value of the overall extent of RED for the presence/absence of DUs with the highest combination of sensitivity and specificity. The resulting cutoff value (Youden index 0.825) was >68.7 (sensitivity 92.31%; specificity 90.24%), with a positive likelihood ratio (LR+) of 9.46. Conclusion: Our data strongly support that a capillary density between 4 and 6 loops/mm is the best capillaroscopic quantitative measure associated with DUs and able to discriminate the probability of having DUs. When SSc-specific antibodies and/or other laboratory/clinical parameters are not yet available, the overall capillary density allows physicians to easily assess SSc patients for DUs and the risk of developing DUs. Disclosure of Interests: None declared
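The cutoff selection described above maximizes sensitivity + specificity - 1 (the Youden index). A sketch of that selection; the operating points other than the reported one are invented for illustration:

```python
def youden_best(points):
    """points: iterable of (cutoff, sensitivity, specificity) tuples.
    Returns the point maximizing Youden's J = sens + spec - 1."""
    return max(points, key=lambda p: p[1] + p[2] - 1)

# Hypothetical ROC operating points; the middle one reproduces the
# values reported in the abstract for the overall extent of RED.
candidates = [
    (50.0, 0.9800, 0.6000),
    (68.7, 0.9231, 0.9024),
    (80.0, 0.7000, 0.9700),
]
cutoff, sens, spec = youden_best(candidates)
lr_plus = sens / (1 - spec)   # positive likelihood ratio
```

At the selected point, J = 0.9231 + 0.9024 - 1 ≈ 0.825 and LR+ ≈ 9.46, matching the abstract's reported values.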

