Edible flowers commercialized in local markets of Pachuca de Soto, Hidalgo, Mexico

2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Carmen Julia Figueredo-Urbina ◽  
Gonzalo D. Álvarez-Ríos ◽  
Laura Cortés Zárraga

Background: Edible flowers are important food resources due to their high content of nutrients and bioactive compounds. In Mexico, these resources have been part of the diet of indigenous and mestizo peoples and are also important sources of income for the families that cultivate, gather, and sell them. Questions: Which edible flower species are commercialized in local markets in Pachuca de Soto, Hidalgo, Mexico? How are they prepared? What are their nutritional contents and conservation risk categories according to the literature? Studied species: Agave salmiana, A. mapisaga, Aloe vera, Arbutus xalapensis, Chenopodium berlandieri subsp. nuttalliae, Cucurbita pepo ssp. pepo, C. moschata, Dasylirion acrotrichum, Erythrina americana, Euphorbia radians, Myrtillocactus geometrizans, Phaseolus coccineus, Yucca filamentosa. Study site and dates: Local markets of Pachuca de Soto, Hidalgo, Mexico, January 2019 to March 2020. Methods: Interview-purchases with sellers and direct observations in markets, together with a bibliographic review of the nutritional contents of the recorded species and their conservation status. Results: We recorded 13 species of edible flowers and eight preparation methods. Five species are cultivated, five are gathered from pine-oak forest or xerophilous scrub ecosystems, and three are obtained from both crops and natural ecosystems. The gualumbos (Agave salmiana and A. mapisaga) are the most commercialized flowers and had the most forms of preparation (six). Seven of the traded species are placed in a conservation risk category. Conclusions: The diversity of edible flowers used and their preparation methods exemplify the traditional knowledge of the groups that handle them and their importance as food and economic sustenance.

2020 ◽  
Vol 90 ◽  
pp. 19-31
Author(s):  
D. V. Zobkov ◽  
A. A. Poroshin ◽  
A. A. Kondashov ◽  
...  

Introduction. A mathematical model is presented for assigning protection objects to risk categories in the field of fire safety. The model is based on the probability of adverse effects of fires causing harm (damage) of various extent and severity to the life or health of citizens, and on the acceptable risk of harm (damage) from fires. Goals and objectives. The purpose of the study is to develop a procedure for assigning protection objects to a risk category of harm (damage) based on estimates of the probability of fires with consequences of corresponding severity, to determine the acceptable level of risk of harm (damage) due to fires, and to calculate numerical values of the criteria for assigning objects of protection to the appropriate risk categories. Methods. The boundaries of the intervals corresponding to the risk categories are determined by dividing the logarithmic scale of severity of adverse effects of fires into equal segments. Classification methods are used to assign objects of protection to a specific risk category. Results and discussion. Based on the level of severity of potential negative consequences of a fire, risk categories were determined for groups of protection objects that are homogeneous by type of economic activity and by functional fire hazard class. The risk category for each individual object of protection is proposed to be determined using the so-called "identification of a controlled person" index within a group of objects that are homogeneous by type of economic activity and class of functional fire hazard. Depending on the risk category, the periodicity of planned control and supervision measures for the specific object of protection is determined, taking into account its socio-economic characteristics and the state of compliance with fire safety requirements by the controlled person. Conclusions. 
To develop criteria for classifying protection objects that are homogeneous in terms of economic activity and functional fire hazard class, the probability of negative consequences of fires causing harm (damage) of various extent and severity to the life or health of citizens, together with the acceptable risk of harm (damage) as a result of fires, is used. The risk category for each individual object of protection is determined taking into account the socio-economic characteristics of the object that affect the level of its fire safety, as well as the criteria of integrity of the controlled person that characterize the probability of non-compliance with mandatory fire safety requirements at the object of protection. Calculations are made, and numerical values of the criteria for assigning protection objects that are homogeneous in terms of economic activity and functional fire hazard class to a certain risk category are proposed. Key words: object of protection, probability of fire, acceptable level of risk, risk category, dangerous factor of fire, death and injury of people.
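The interval-splitting step described in the Methods can be sketched as follows. This is a minimal illustration with hypothetical severity bounds and category count; the actual numerical criteria are those developed in the study.

```python
import math

def assign_risk_category(severity, s_min, s_max, n_categories):
    """Assign a risk category by splitting the logarithmic severity scale
    [s_min, s_max] into n_categories equal-width intervals.
    All parameters here are illustrative, not the study's values."""
    if severity <= s_min:
        return 0
    if severity >= s_max:
        return n_categories - 1
    log_span = math.log10(s_max) - math.log10(s_min)
    position = math.log10(severity) - math.log10(s_min)
    return int(position / log_span * n_categories)

# With bounds 1..10000 and 4 categories, each category spans one
# decade of severity: 1-10, 10-100, 100-1000, 1000-10000.
```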


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
A Leiherer ◽  
A Muendlein ◽  
C.H Saely ◽  
R Laaksonen ◽  
M Laaperi ◽  
...  

Abstract The Coronary Event Risk Test (CERT) is a validated cardiovascular risk predictor that uses circulating ceramide concentrations to allocate patients into one of four risk categories. This test has recently been updated (CERT-2) to additionally include phosphatidylcholine concentrations. The purpose of this study was to investigate the power of CERT and CERT-2 to predict cardiovascular mortality in patients with cardiovascular disease (CVD). We investigated a cohort of 999 patients with established CVD. Overall, comparing survival curves (figure) over 12 years of follow-up and the predictive power of survival models using net reclassification improvement (NRI), CERT-2 was the best predictor of cardiovascular mortality, surpassing CERT (NRI=0.456; p=0.01) and also the 2019 ESC-SCORE (NRI=0.163; p=0.04). Patients in the highest risk category of CERT, as compared to the lowest category, had an HR of 3.63 [2.09–6.30] for cardiovascular death; for CERT-2 the corresponding HR was 6.02 [2.47–14.64]. Among patients with T2DM (n=322), the HR for cardiovascular death was 3.00 [1.44–6.23] using CERT and 7.06 [1.64–30.50] using CERT-2; the corresponding HRs among non-diabetic subjects were 2.99 [1.20–7.46] and 3.43 [1.03–11.43], respectively. We conclude that both the CERT and CERT-2 scores are powerful predictors of cardiovascular mortality in CVD patients, especially in those with T2DM; performance is even higher with CERT-2. Funding Acknowledgement Type of funding source: None


2016 ◽  
Vol 79 (3) ◽  
pp. 501-506 ◽  
Author(s):  
DIOGO THIMOTEO da CUNHA ◽  
VERIDIANA VERA de ROSSO ◽  
ELKE STEDEFELDT

ABSTRACT The objective of this study was to verify the characteristics of food safety inspections, considering risk categories and binary scores. A cross-sectional study was performed with 439 restaurants in 43 Brazilian cities. A food safety checklist with 177 items was applied to the food service establishments. These items were classified into four groups (R1 to R4) according to the main factors that can cause foodborne outbreaks: R1, time and temperature aspects; R2, direct contamination; R3, water conditions and raw material; and R4, indirect contamination (i.e., structures and buildings). A score adjusted to 100 was calculated for the overall violation score and for the violation score of each risk category. The average violation score (standard deviation) was 18.9% (16.0), ranging from 0.0 to 76.7%. Restaurants with a low overall violation score (approximately 20%) presented a high number of violations from the R1 and R2 groups, the riskiest violation types. Practical solutions to minimize this evaluation bias are discussed. Food safety evaluation should use weighted scores and be risk based. However, some precautions must be taken by researchers, health inspectors, and health surveillance departments to develop an adequate and reliable instrument.
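The scoring contrast the authors draw, a flat score adjusted to 100 versus the risk-weighted score they recommend, can be sketched as follows. The weights and counts are hypothetical; the study does not publish a specific weighting scheme here.

```python
def violation_score(violations, applicable):
    """Flat violation score adjusted to 100: percent of applicable
    checklist items found in violation."""
    return 100.0 * violations / applicable if applicable else 0.0

def weighted_score(group_violations, group_applicable, weights):
    """Risk-weighted violation score: riskier groups (e.g. R1, R2)
    contribute more per violation. Weights are illustrative."""
    num = sum(weights[g] * group_violations[g] for g in weights)
    den = sum(weights[g] * group_applicable[g] for g in weights)
    return 100.0 * num / den if den else 0.0

# Example: hypothetical weights emphasizing time/temperature (R1)
# and direct contamination (R2) over structural items (R4).
WEIGHTS = {"R1": 4, "R2": 3, "R3": 2, "R4": 1}
```

Under such a scheme, a restaurant with few total violations but many in R1/R2 scores worse than the flat score suggests, which is exactly the evaluation bias the abstract describes.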


Author(s):  
Stephane Tshitenge ◽  
Adewale Ganiyu ◽  
Deogratias Mbuka ◽  
Joseph M. Shama

Aim: The present study aimed (1) to evaluate the proportion of each diabetic foot (DF) risk category, according to the International Working Group on the Diabetic Foot (IWGDF) consensus, in patients attending the diabetic clinic in Selebi Phikwe Government Hospital (SPGH) and (2) to examine some of the factors that may be associated with progression to higher risk categories, such as anthropometric measurements, blood pressure, glycosylated haemoglobin (HbA1c) and lipid profile. Methods: A retrospective, cross-sectional chart review of patients who had attended the diabetic clinic in SPGH from January 2013 to December 2013 was performed. Patients were included if they had undergone a foot examination. Patients with amputation due to accident were excluded. The DF risk category was assessed by determining the proportion of patients in each of four risk categories, as described by the IWGDF consensus. Results: The study encompassed 144 records from patients reviewed for foot examination from January to December 2013. Patients’ ages were between 16 and 85 years; 46 (40%) were male and 98 (60%) were female. The majority (122, [85%]) of patients were in DF risk category 0, whilst a limited number of patients were classified in risk category 1 (10, [6.9%]), risk category 2 (7, [4.9%]) and risk category 3 (5, [3.5%]). Most of the patients had type 2 diabetes mellitus (139, [97%; 95% CI 92% − 99%]). Patients’ age was associated with progressively higher DF risk categories; the adjusted odds ratio was 1.1 (95% CI 1.03−1.14; p = 0.004). Conclusion: The present study revealed that about 15% of patients attending the SPGH diabetic clinic were categorised in the higher risk groups for diabetic foot, and patients’ age was linked to the higher DF risk categories.


Circulation ◽  
2014 ◽  
Vol 129 (suppl_1) ◽  
Author(s):  
Monika M Safford ◽  
Paul Muntner ◽  
Raegan Durant ◽  
Stephen Glasser ◽  
Christopher Gamboa ◽  
...  

Introduction: To identify potential targets for eliminating disparities in cardiovascular disease outcomes, we examined race-sex differences in awareness, treatment and control of hyperlipidemia in the REGARDS cohort. Methods: REGARDS recruited 30,239 black and white participants aged ≥45 years residing in the 48 contiguous US states between 2003 and 2007. Baseline data were collected via telephone interviews followed by in-home visits. We categorized participants into coronary heart disease (CHD) risk groups (CHD or risk equivalent [highest risk]; Framingham Coronary Risk Score [FRS] >20%; FRS 10-20%; FRS <10%) following the 3rd Adult Treatment Panel. Prevalence, awareness, treatment and control of hyperlipidemia were described across risk categories and race-sex groups. Multivariable models examined associations for hyperlipidemia awareness, treatment and control between race-sex groups compared with white men, adjusting for predisposing, enabling and need factors. Results: There were 11,677 individuals at highest risk, 847 with FRS >20%, 5,791 with FRS 10-20%, and 10,900 with FRS <10%; 43% of white men, 29% of white women, 49% of black men and 43% of black women were in the highest risk category. More high-risk whites than blacks were aware of their hyperlipidemia, but treatment was 10-17% less common and control was 5-49% less common among race-sex groups compared with white men across risk categories. After multivariable adjustment, all race-sex groups relative to white men were significantly less likely to be treated or controlled, with the greatest differences for black women vs. white men (Table). Results were similar when stratified on CHD risk and area-level poverty tertile. Conclusion: Compared to white men at similar CHD risk, fewer white women, black men and especially black women who were aware of their hyperlipidemia were treated, and when treated, they were less likely to achieve control, even after adjusting for factors that influence health services utilization.


2020 ◽  
pp. 26-35
Author(s):  
Денис Валерьевич Зобков ◽  
Александр Алексеевич Порошин ◽  
Андрей Александрович Кондашов ◽  
Евгений Васильевич Бобринев ◽  
Елена Юрьевна Удавцова

The international experience of reforming fire safety compliance inspections and implementing a risk-based approach is analyzed. A model is developed for assigning protection objects to risk categories depending on the probable harm, calculated from the number of people killed and injured in fires. Criteria for assigning protection objects to risk categories are formulated. Risk categories are calculated for groups of objects that are homogeneous by type of economic activity and functional fire hazard class, and the result is compared with the existing classification of protection objects by risk category. Methodological approaches are presented for calculating the risk of harm (damage) in buildings (structures) as a result of fire, for the purpose of assigning buildings and structures to risk categories and justifying the frequency of scheduled inspections at these facilities. The probability of fire occurrence is calculated for groups of protection objects that are homogeneous by type of economic activity and functional fire hazard class, in order to assign objects of protection to risk categories. The social damage, expressed in the death and injury of people as a result of fire, is calculated for the same purpose. Classification of objects of protection by risk category is performed using an indicator of the severity of potential negative consequences of fires. This indicator characterizes the degree to which the expected risk of negative consequences of fires for the corresponding group of protection objects exceeds the permissible risk of negative consequences of fire. 
The permissible risk of negative consequences of fires is calculated on the basis of statistical data, taking into account the individual fire risk of exposure of people in buildings and structures to critical values of fire hazards. The criteria for assigning groups of protection objects to the appropriate risk categories are formulated on the basis of the distribution of numerical values of the severity of potential negative consequences of fires. The severity of potential negative consequences of fires is assessed for protection objects that are homogeneous by type of economic activity and functional fire hazard class, and the risk categories of the corresponding groups of objects are determined. The proposed classification of protection objects by risk category is compared with the existing classification. The calculation results showed that scheduled inspections of protection objects by the Federal state supervision bodies, with a frequency depending on the assigned risk category, play a significant role in improving the fire safety of objects. A decrease in the intensity of scheduled inspections may, at the same time, lead to a corresponding decrease in the level of fire protection of objects.


Check List ◽  
2016 ◽  
Vol 12 (1) ◽  
pp. 1833 ◽  
Author(s):  
Osvaldo Eric Ramírez-Bravo ◽  
Lorna Hernandez-Santin

The Nearctic and Neotropical realms converge in central Mexico, where many areas have not been adequately characterized. Our objective was to revise the distribution and conservation status of carnivores in the state of Puebla, central Mexico. Between September 2008 and January 2011, we conducted interviews and fieldwork in seven previously selected areas. We complemented our data with bibliographical research. We obtained 733 records of 21 species, representing 63% of the carnivores reported for Mexico. We expanded the known ranges of three species: Ocelot (Leopardus pardalis), Bobcat (Lynx rufus), and Tropical Ringtail (Bassariscus sumichrasti). Fifty percent of the carnivore species we recorded in Puebla are considered under some risk category. We found that carnivores in our study area are vulnerable to hunting pressure, human-carnivore conflicts that result in lethal control practices, and extensive habitat loss.


Blood ◽  
2013 ◽  
Vol 122 (21) ◽  
pp. 1544-1544 ◽  
Author(s):  
Michael Pfeilstöcker ◽  
Heinz Tüchler ◽  
Julie Schanz ◽  
Guillermo Sanz ◽  
Guillermo Garcia Manero ◽  
...  

Abstract Introduction: New, refined prognostic scoring systems have been established for MDS. Most scores assess prognosis at the time of diagnosis, assuming stable prediction over time. Earlier studies have shown a moderate loss of prognostic power over time in scores using clinical parameters, whereas cytogenetic scores maintained prognostic power and scores including comorbidity gained prognostic power (Pfeilstöcker et al, 2012). The aim of this multicenter retrospective study was to assess the relative stability of the newly developed scoring systems over time, to compare and explain observed time-related losses of prognostic power, and to discuss their clinical implications. Methods: This study is based on 7212 untreated (no disease-modifying treatment) MDS patients from multiple institutional databases of the IWG-PM, which generated the IPSS-R (Greenberg et al, 2012). Patient characteristics were comparable with other populations: median age 71 years, male gender 60%, median overall survival 3.8 years (range 3.7-4.0), median time to AML transformation not reached, with 25% of patients transforming to AML after 6.8 years. Patients were diagnosed and classified by FAB and WHO; cytogenetics were classified by the original IPSS subtypes and by the recently refined proposal integrated into the IPSS-R (Schanz et al, 2012). The following scores were analysed for their stability over time: IPSS, IPSS-R, WPSS variants, cytogenetic scores, age, performance status and other differentiating features of the IPSS-R. Time variations were described by the Cox zph test and by applying Dxy, a measure of concordance for censored data, at separate observation periods. Results: In line with previous observations, loss of prognostic power occurred over time after diagnosis in all scoring systems. 
While for the entire population the risk between adjacent IPSS-R risk categories differs by ∼80%, for patients observed at least 1 year the increase is ∼66%, and for those observed 4 years it is only ∼25%. The IPSS-R and particularly its age-including version (the IPSS-RA) retained the highest prognostic values compared to all other scoring systems at all time points. Dxy for IPSS-R: at diagnosis 0.43, 1 year 0.35, 2 years 0.27, 4 years 0.14. Including age, as in the IPSS-RA, was associated with less loss of prognostic power over time: Dxy at diagnosis 0.46, 1 year 0.38, 2 years 0.31, 4 years 0.22. For the IPSS and WPSS (available for the latter in only 33% cases), these values were: 0.37, 0.30, 0.22, 0.11 and 0.44, 0.36, 0.29, 0.18 respectively. Considering risk categories, the risk remained fairly constant over time for the lower risk categories in every analyzed scoring system, while the risks in the higher risk categories were especially high in the second half of the first year after diagnosis, diminishing thereafter, thus reducing the prognostic value of these categories over time. To determine whether statistical weights optimized for each time period would alter these results, time-specific weights were applied, which did not demonstrate substantially different prognostic values from the basic model analysis. Particularly good retention of prognostic power was found in the lower risk categories over time. The lesser retention of prognostic power in the higher risk categories appeared related to loss of a larger portion of these patients over time due to their deaths or being censored by their beginning treatment. For the IPSS-R intermediate risk category patients, the prognosis for survival approached the “high” category ∼3 years after diagnosis, while it remained intermediate regarding their risk of AML transformation. 
Conclusions These data demonstrate that a degree of attrition of prognostic value occurred over time from diagnosis for all of the assessed MDS prognostic scoring systems. The IPSS-R, particularly the age-inclusive IPSS-RA, best retained such prognostic capability over time for the untreated patients analyzed. Disclosures: No relevant conflicts of interest to declare.


Blood ◽  
2015 ◽  
Vol 126 (23) ◽  
pp. 1672-1672
Author(s):  
Meritxell Nomdedeu ◽  
Xavier Calvo ◽  
Dolors Costa ◽  
Montserrat Arnan ◽  
Helena Pomares ◽  
...  

Abstract Introduction: The MDS are a group of clonal hematopoietic disorders characterized by blood cytopenias and an increased risk of transformation into acute myeloid leukemia (AML). The MDS predominate in older people (median age at diagnosis > 70 years), so that a fraction of the observed mortality is driven by age-related factors shared with the general population rather than by the MDS. Distinguishing between the MDS-related and unrelated mortality rates will allow a better assessment of the population health impact of the MDS and more accurate prognostication. This study was aimed at quantifying the MDS-attributable mortality and its relationship with the IPSSR risk categories. Methods: The database of the GESMD was queried for patients diagnosed with primary MDS after 1980 according to the WHO 2001 classification. Patients with CMML, patients younger than 16 years, and those who lacked basic demographic or follow-up data were excluded. Relative survival and MDS-attributable mortality were calculated by the cohort method and statistically compared by Poisson multivariate regression as described by Dickman (Stat Med 2004; 23: 51). Three main parameters were calculated: the observed (all-cause) mortality, the MDS-attributable mortality (both as percentages of the initial cohort), and the fraction of the observed mortality attributed to the MDS. Results: In total, 7408 patients met the inclusion criteria and constitute the basis for this study. Among these patients, 5307 had enough data to be classified according to the IPSSR. Median age was 74 (IQR: 16-99) years and 58% were males. The most frequent WHO categories were RAEB, type I or II (29% of cases), RCMD (28%), and RA with ring sideroblasts (16%). Most patients (72%) were classified within the very low and low risk categories of the IPSSR. At the study closing date (December 2014), 1022 patients had progressed to AML, 3198 had died (974 after AML) and 3210 were censored alive. 
The median actuarial survival for the whole series was 4.8 (95% CI: 4.6-5.1) years, and 30% of patients are projected to survive longer than 10 years. The overall MDS-attributable mortality at 5 years from diagnosis was 39%, which accounted for three-quarters of the observed mortality (51%, figure). The corresponding figures at 10 years for the MDS-attributable and observed mortality were 55% and 71%, respectively. According to the IPSSR, the 5-year MDS-attributable mortality rate was 19% for the very low risk category, 39% (low risk), 70% (intermediate risk), 78% (high risk), and 92% (very high risk). On average, the incidence rate ratio for the MDS-attributable mortality increased 1.9 times (95% CI: 1.7-2.3, p<0.001) as the IPSSR worsened from one risk category to the next. The fraction of the observed mortality attributed to the MDS was 0.55 for the very low risk category, 0.79 (low risk), 0.93 (intermediate risk), 0.96 (high risk), and 0.99 (very high risk). After distinguishing between AML-related and AML-unrelated mortality, the 5-year MDS-attributable mortality not related to AML was 10% for the very low risk category, 20% (low risk), 33% (intermediate risk), 42% (high risk), and 44% (very high risk). By comparing these figures with the above ones, we estimated that about 50% of the MDS-attributable mortality was AML-unrelated and that this fraction remained nearly constant across the five IPSSR categories. Conclusions: About three-quarters of the mortality observed in patients with MDS is caused by the disease, the remaining one-quarter being due to MDS-independent factors shared with the general population. The MDS-attributable mortality increases with the IPSSR risk category, from half the observed mortality in the very low risk group to nearly all the mortality observed in the high and very high risk groups. 
Half the MDS-attributable mortality is driven by factors unrelated to leukemic transformation, a proportion that remains constant across the five IPSSR risk categories. Disclosures Valcarcel: AMGEN: Honoraria, Membership on an entity's Board of Directors or advisory committees, Speakers Bureau; NOVARTIS: Honoraria, Membership on an entity's Board of Directors or advisory committees; GSK: Membership on an entity's Board of Directors or advisory committees, Speakers Bureau; CELGENE: Honoraria, Membership on an entity's Board of Directors or advisory committees, Speakers Bureau. Ramos: AMGEN: Consultancy, Honoraria; NOVARTIS: Consultancy, Honoraria; JANSSEN: Honoraria, Membership on an entity's Board of Directors or advisory committees; CELGENE: Consultancy, Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding. Esteve: Celgene: Consultancy, Honoraria; Janssen: Consultancy, Honoraria.
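The headline "three-quarters" figure follows directly from the reported 5-year rates, observed (all-cause) mortality of 51% versus MDS-attributable mortality of 39%. A one-line check (the function name is illustrative):

```python
def attributed_fraction(attributable, observed):
    """Fraction of observed mortality attributed to the disease,
    given both rates as proportions of the initial cohort."""
    return attributable / observed

# 5-year figures from the abstract: 39% attributable, 51% observed.
fraction = attributed_fraction(0.39, 0.51)  # ≈ 0.76, i.e. three-quarters
```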


Author(s):  
Brian P. Quinn ◽  
Mary Yeh ◽  
Kimberlee Gauvreau ◽  
Fatima Ali ◽  
David Balzer ◽  
...  

Background Advancements in the field, including novel procedures and multiple interventions, require an updated approach to accurately assess patient risk. This study aims to modernize patient hemodynamic and procedural risk classification through the creation of risk assessment tools to be used in congenital cardiac catheterization. Methods and Results Data were collected for all cases performed at sites participating in the C3PO (Congenital Cardiac Catheterization Project on Outcomes) multicenter registry. Between January 2014 and December 2017, 23 119 cases were recorded in 13 participating institutions, of which 88% of patients were <18 years of age and 25% <1 year of age; a high‐severity adverse event occurred in 1193 (5.2%). Case types were defined by procedure(s) performed and grouped on the basis of association with the outcome, high‐severity adverse event. Thirty‐four unique case types were determined and stratified into 6 risk categories. Six hemodynamic indicator variables were empirically assessed, and a novel hemodynamic vulnerability score was determined by the frequency of high‐severity adverse events. In a multivariable model, case‐type risk category (odds ratios for category: 0=0.46, 1=1.00, 2=1.40, 3=2.68, 4=3.64, and 5=5.25; all P ≤0.005) and hemodynamic vulnerability score (odds ratio for score: 0=1.00, 1=1.27, 2=1.89, and ≥3=2.03; all P ≤0.006) remained independent predictors of patient risk. Conclusions These case‐type risk categories and the weighted hemodynamic vulnerability score both serve as independent predictors of patient risk for high‐severity adverse events. This contemporary procedure‐type risk metric and weighted hemodynamic vulnerability score will improve our understanding of patient and procedural outcomes.
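Because the two predictors are reported as independent in the multivariable model, their odds ratios combine multiplicatively on the odds scale, as in a logistic regression. A minimal sketch using the odds ratios reported above; the function name, the category-1/score-0 reference case, and the pooling of scores ≥3 into one bucket are assumptions for illustration:

```python
# Odds ratios reported in the abstract, relative to case-type
# category 1 and hemodynamic vulnerability score 0.
OR_CATEGORY = {0: 0.46, 1: 1.00, 2: 1.40, 3: 2.68, 4: 3.64, 5: 5.25}
OR_HEMODYNAMIC = {0: 1.00, 1: 1.27, 2: 1.89, 3: 2.03}  # scores >=3 pooled

def combined_odds_ratio(category, hemo_score):
    """Combined OR for a high-severity adverse event, treating the
    two predictors as independent multiplicative factors."""
    return OR_CATEGORY[category] * OR_HEMODYNAMIC[min(hemo_score, 3)]

# A category-5 case with hemodynamic score >=3 carries roughly
# 10-fold higher odds than the reference case.
```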

