Network Analysis for Projects with High Risk Levels in Uncertain Environments

2022 · Vol 70 (1) · pp. 1281-1296
Author(s): Mohamed Abdel-Basset, Asmaa Atef, Mohamed Abouhawwash, Yunyoung Nam, Nabil M. AbdelAziz
Water · 2021 · Vol 13 (13) · pp. 1804
Author(s): Cassi J. Gibson, Abraham K. Maritim, Jason W. Marion

Quantitatively assessing fecal indicator bacteria in drinking water in limited-resource settings (e.g., disasters, remote areas) can inform public health strategies for reducing waterborne illness. This study compared two common approaches for quantifying Escherichia coli (E. coli) density in natural water with the ColiPlate™ kit approach. To compare the methods, 41 field samples were collected from natural water sources in Kentucky (USA). E. coli densities were then determined by (1) membrane filtration with modified membrane-thermotolerant E. coli (mTEC) agar, (2) the Idexx Quanti-Tray® 2000 with the Colilert® substrate, and (3) the Bluewater Biosciences ColiPlate kit. E. coli density data from all three methods were significantly correlated (p < 0.001), and paired t-tests showed no difference in E. coli densities between the methods (p > 0.05). With modified mTEC assigned as the reference method for identifying the World Health Organization-defined “very high-risk” level of fecal contamination (>100 E. coli CFU/100 mL), both ColiPlate and Colilert exhibited excellent discrimination for screening very high-risk samples, with an area under the receiver operating characteristic curve of approximately 89%. These data suggest that ColiPlate remains an effective monitoring tool for quantifying E. coli density and characterizing fecal contamination risks in water.
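A minimal sketch of the statistical comparison described above (between-method correlation, paired t-tests on log densities, and ROC screening against the >100 CFU/100 mL reference); the data and noise levels are invented for illustration, not the study's 41 samples:

```python
# Minimal sketch (not the study's code): comparing paired E. coli density
# estimates from three methods and screening for WHO "very high-risk" water.
# All numbers below are made-up illustrative data, not the study's samples.
import numpy as np
from scipy.stats import pearsonr, ttest_rel
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
true_log  = rng.uniform(0, 4, 41)                 # hypothetical log10 CFU/100 mL
mtec      = true_log + rng.normal(0, 0.20, 41)    # reference: modified mTEC
coliplate = true_log + rng.normal(0, 0.25, 41)    # ColiPlate estimate
colilert  = true_log + rng.normal(0, 0.25, 41)    # Colilert/Quanti-Tray estimate

r, p    = pearsonr(mtec, coliplate)               # correlation between methods
t, p_t  = ttest_rel(mtec, coliplate)              # paired t-test on log densities

# "Very high-risk" per WHO: > 100 CFU/100 mL, i.e. log10 density > 2, judged
# against the mTEC reference; AUC measures screening performance.
very_high_risk = (mtec > 2).astype(int)
auc = roc_auc_score(very_high_risk, coliplate)
print(f"r={r:.2f} (p={p:.3g}), paired t p={p_t:.3g}, AUC={auc:.2f}")
```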


2021 · Vol 21 (1)
Author(s): Yuanyuan Chen, Dongru Chen, Huancai Lin

Abstract Background: Infiltration and sealing are micro-invasive treatments for arresting proximal non-cavitated caries lesions; however, their efficacies under different conditions remain unknown. This systematic review and meta-analysis aimed to evaluate the caries-arresting effectiveness of infiltration and sealing and to further analyse their efficacies across dentition types and caries risk levels. Methods: Six electronic databases were searched for published literature, and references were searched manually. Split-mouth randomised controlled trials (RCTs) comparing infiltration/sealing with non-invasive treatments in proximal lesions were included. The primary outcome was obtained from radiographic readings. Results: In total, 1033 citations were identified, and 17 RCTs (22 articles) were included. Infiltration and sealing reduced the odds of lesion progression (infiltration vs. non-invasive: OR = 0.21, 95% CI 0.15–0.30; sealing vs. placebo: OR = 0.27, 95% CI 0.18–0.42). For both the primary and permanent dentitions, infiltration and sealing were more effective than non-invasive treatments (primary dentition: OR = 0.30, 95% CI 0.20–0.45; permanent dentition: OR = 0.20, 95% CI 0.14–0.28). The overall effects of infiltration and sealing differed significantly from the control effects across caries risk levels (OR = 0.20, 95% CI 0.14–0.28). Except at moderate caries risk (moderate risk: OR = 0.32, 95% CI 0.01–8.27), there were significant differences between micro-invasive and non-invasive treatments (low risk: OR = 0.24, 95% CI 0.08–0.72; low to moderate risk: OR = 0.38, 95% CI 0.18–0.81; moderate to high risk: OR = 0.17, 95% CI 0.10–0.29; and high risk: OR = 0.14, 95% CI 0.07–0.28). Likewise, except at moderate caries risk (moderate risk: OR = 0.32, 95% CI 0.01–8.27), infiltration was superior (low risk: OR = 0.24, 95% CI 0.08–0.72; low to moderate risk: OR = 0.38, 95% CI 0.18–0.81; moderate to high risk: OR = 0.20, 95% CI 0.10–0.39; and high risk: OR = 0.14, 95% CI 0.05–0.37). Conclusion: Infiltration and sealing were more efficacious than non-invasive treatments for halting non-cavitated proximal lesions.
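The pooled odds ratios above are the kind of output an inverse-variance meta-analysis produces; a minimal fixed-effect sketch, with invented per-study ORs rather than the review's actual trial data:

```python
# Minimal sketch of inverse-variance pooling of odds ratios, the computation
# behind summary estimates such as OR = 0.21 (95% CI 0.15-0.30). The
# per-study ORs and CIs below are invented for illustration.
import numpy as np

studies = [  # (OR, lower 95% CI, upper 95% CI) per hypothetical trial
    (0.25, 0.10, 0.62),
    (0.18, 0.08, 0.40),
    (0.30, 0.15, 0.60),
]

log_or = np.log([s[0] for s in studies])
# Back out each study's standard error from its CI width on the log scale.
se = (np.log([s[2] for s in studies]) - np.log([s[1] for s in studies])) / (2 * 1.96)

w = 1 / se**2                               # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)     # fixed-effect pooled log OR
pooled_se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```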


2021 · pp. 106002802110447
Author(s): Haley M. Gonzales, James N. Fleming, Mulugeta Gebregziabher, Maria Aurora Posadas Salas, John W. McGillicuddy, ...

Background: Medication safety issues have detrimental implications for long-term outcomes in the high-risk kidney transplant (KTX) population. Medication errors, adverse drug events, and medication nonadherence are important and modifiable mechanisms of graft loss. Objective: To describe the frequency and types of interventions made during a pharmacist-led, mobile health–based intervention in KTX recipients and the impact on patient risk levels. Methods: This was a secondary analysis of data collected during a 12-month, parallel-arm, 1:1 randomized controlled clinical trial including 136 KTX recipients. Participants were randomized to receive either usual care or supplemental, pharmacist-driven medication therapy monitoring and management via a smartphone-enabled app integrated with telemonitoring of blood pressure and glucose (when applicable) and risk-based televisits. The primary outcome was pharmacist intervention type; secondary outcomes included the frequency of interventions and changes in risk levels. Results: A total of 68 patients were randomized to the intervention arm and included in this analysis. Mean age at baseline was 50.2 years; 51.5% of participants were male, and 58.8% were Black. The most common pharmacist intervention types were medication reconciliation and patient education, followed by medication changes. Medication reconciliation remained frequent throughout the study period, whereas education and medication changes trended downward. From baseline to month 12, we observed an approximately 15% decrease in high-risk patients and a corresponding 15% increase in medium- or low-risk patients. Conclusion and Relevance: A pharmacist-led mHealth intervention may enhance opportunities for pharmacological and nonpharmacological interventions and mitigate risk levels in KTX recipients.
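A minimal sketch of how the reported risk-level shift might be tallied; the counts are hypothetical and chosen only to illustrate the bookkeeping, not trial data:

```python
# Minimal sketch (hypothetical counts, not trial data): tallying how patients'
# risk-level distribution shifts from baseline to month 12, analogous to the
# decrease in high-risk patients described above.
import pandas as pd

df = pd.DataFrame({
    "patient":  range(1, 11),
    "baseline": ["high"] * 5 + ["medium"] * 3 + ["low"] * 2,
    "month12":  ["high"] * 3 + ["medium"] * 4 + ["low"] * 3,
})

for col in ("baseline", "month12"):
    share = df[col].value_counts(normalize=True).mul(100).round(1)
    print(f"{col}: {share.to_dict()}")  # % of patients per risk level
```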


2020 · pp. 070674372096171
Author(s): Srividya N. Iyer, Sally S. Mustafa, Laura Moro, G. Eric Jarvis, Ridha Joober, ...

Objective: We aimed to investigate whether individuals with first-episode psychosis (FEP) receiving extended early intervention (EI) were less likely to experience suicidal ideation and behaviors than those transferred to regular care after 2 years of EI. A secondary objective was to examine the 5-year course of suicidality in FEP. Methods: We conducted a secondary analysis of a randomized controlled trial in which 220 patients, after 2 years of EI, were randomized to receive extended EI or regular care for the subsequent 3 years. Suicidality was rated using the Brief Psychiatric Rating Scale, and linear mixed model analysis was used to study time and group effects on suicidality. Results: The extended EI and regular care groups did not differ on suicidality. There was a small decrease in suicidality over time, F(7, 1038) = 1.84, P = 0.077, with an immediate sharp decline within a month of treatment followed by stability over the remaining 5 years. Patients who endorsed suicidality at entry (46.6%) had higher baseline positive, negative, and depressive symptoms. The 5-year course fell into 3 groups: never endorsed suicidality (33.9%), endorsed suicidality at low-risk levels (43.1%), and endorsed suicidality at high-risk levels (23.0%). The high-risk group had a higher proportion of affective versus nonaffective psychosis diagnoses, higher baseline positive and depressive symptoms, higher 5-year mean depression scores, and fewer weeks of positive symptom remission over the 5-year course. Conclusions: The first month of treatment is a critical period for suicide risk in FEP. Although early reductions in suicidality are often maintained, our findings make the case for sustained monitoring for suicide risk management.
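A minimal sketch of the kind of linear mixed model used here (time and group effects on repeated suicidality ratings, with a random intercept per patient); the data, effect sizes, and variable names are all invented:

```python
# Minimal sketch (invented data): a linear mixed model for repeated
# suicidality ratings, testing time and group (extended EI vs. regular care)
# effects with a random intercept per patient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, visits = 40, [0, 1, 6, 12, 24, 36, 48, 60]     # months since trial entry
rows = []
for pid in range(n):
    group = "extended_EI" if pid % 2 == 0 else "regular_care"
    base = rng.normal(2.5, 1.0)                   # patient-level intercept
    for m in visits:
        # sharp early drop, then stability, plus noise (illustrative only)
        score = max(1.0, base - 1.0 * (m > 0) + rng.normal(0, 0.5))
        rows.append({"patient": pid, "months": m, "group": group,
                     "suicidality": score})
df = pd.DataFrame(rows)

model = smf.mixedlm("suicidality ~ months + group", df, groups=df["patient"])
print(model.fit().summary())
```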


2020 · Vol 2020 · pp. 1-10
Author(s): Lingzhi Huang, Zheng Si, Xiaoqi Du, Lifeng Wen, Bin Li

The risk of slope failure is determined by the degree of damage a slide would cause. For the extra-high slopes of high-risk water conservancy and hydropower projects, the safety standard should therefore be raised appropriately, and this study explores such a standard on the basis of reliability analysis. Slopes with a high risk of failure are divided into special class I and special class II according to their risk levels and acceptable risk standards. The reliability-based concept of the relative safety margin ratio is used to establish the relationship between annual failure probability and safety factor, from which reasonable safety factors for the different slope classes are obtained. Results show that the safety factors for special class I and special class II slopes are 1.40 and 1.35, respectively. These results can serve as a reference for the safety standards of dams higher than 200 m.
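A minimal sketch of the generic reliability link between a safety factor and an annual failure probability, assuming a lognormally distributed safety factor; this is the textbook first-order formulation, not the paper's relative-safety-margin-ratio derivation, and the coefficient of variation is an assumption:

```python
# Minimal sketch: textbook reliability mapping from mean safety factor (FS)
# to failure probability via P_f = Phi(-beta), assuming FS is lognormal.
# The COV value is assumed for illustration; the paper's own derivation
# (relative safety margin ratio) is not reproduced here.
import numpy as np
from scipy.stats import norm

def failure_probability(fs_mean: float, cov: float = 0.15) -> float:
    """Failure probability for a lognormally distributed safety factor."""
    sigma_ln = np.sqrt(np.log(1 + cov**2))
    beta = np.log(fs_mean / np.sqrt(1 + cov**2)) / sigma_ln  # reliability index
    return norm.cdf(-beta)

for fs in (1.30, 1.35, 1.40):   # spans the two special-class values above
    print(f"FS = {fs:.2f} -> P_f ≈ {failure_probability(fs):.2e}")
```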


1979 · Vol 19 (3) · pp. 180-185
Author(s): N. G. Flanagan, G. K. Lochridge, J. G. Henry, A. J. Hadlow, P. A. Hamer

A field study was carried out with 131 volunteers in an attempt to relate alcohol consumption at 12 social functions to actual blood alcohol levels under reasonably controlled conditions. Food, taken at 7 of these functions, caused an unpredictable delay in alcohol absorption, and some subjects had blood alcohol figures approaching recently defined ‘high risk’ levels. Better correlation was found at the functions without food intake, but there was again considerable individual variation. In 36 subjects, samples were also taken on the following morning; about 12 per cent showed significantly raised levels, but all were under the legal limit for driving. The authors contend that factors other than the alcohol level alone should be considered before a driver is placed in the ‘high risk’ category.
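For context, the classical Widmark approximation is the standard textbook way to relate alcohol consumed to an expected blood alcohol level; the sketch below uses typical constants and deliberately ignores the food-related absorption delays this study documents, so it is an idealization rather than the authors' field method:

```python
# Minimal sketch: classical Widmark estimate of blood alcohol concentration.
# Distribution factors and the elimination rate are typical textbook values
# (assumptions), and absorption delays from food are ignored.
def widmark_bac(grams_alcohol: float, weight_kg: float,
                hours_elapsed: float, sex: str = "m") -> float:
    """Estimated BAC in g/L (promille); clamps at zero once eliminated."""
    r = 0.68 if sex == "m" else 0.55     # Widmark distribution factor
    beta = 0.15                          # elimination rate, g/L per hour
    bac = grams_alcohol / (r * weight_kg) - beta * hours_elapsed
    return max(bac, 0.0)

# e.g., four 10 g drinks measured two hours later in a 75 kg man:
print(f"{widmark_bac(40, 75, 2):.2f} g/L")
```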


Blood · 2008 · Vol 112 (11) · pp. 874-874
Author(s): Fausto R. Loberiza, Anthony J. Cannon, Dennis D. Weisenburger, Julie M. Vose, Matt J. Moehr, ...

Abstract Objectives: We evaluated the association of primary area of residence (urban vs. rural) and treatment (trt) provider (university-based vs. community-based) with overall survival in patients with lymphoma, and determined whether there are patient subgroups that could benefit from better coordination of care. Methods: We performed a population-based study of 2,330 patients with centrally confirmed lymphoma from Nebraska and surrounding states reported to the Nebraska Lymphoma Study Group between 1982 and 2006. Patients' residential ZIP codes at the time of trt were used to determine rural/urban designation, household income, and distance to the trt center, while trt providers were categorized as university-based or community-based. Multivariate analyses were used to group patients into risk levels based on 8 factors associated with survival at the time of trt (age, performance score, Ann Arbor stage, presence of B symptoms, LDH level, tumor bulk, and nodal and extranodal involvement). The following categories were identified: low risk (1–3 factors), intermediate risk (4–5 factors), and high risk (≥6 factors). Cox proportional hazards regression analyses, stratified by type of lymphoma (low-grade NHL, high-grade NHL, and Hodgkin), were used to evaluate the association of place of residence and trt provider with overall survival. Results: Among urban residents, 321 (14%) were treated by university-based providers (UUB) and 816 (35%) by community-based providers (UCB); among rural residents, 332 (14%) were treated by university-based providers (RUB) and 861 (37%) by community-based providers (RCB). Patients from rural areas were more likely to be older and Caucasian, had lower median household income and greater travel distance to trt, and were more likely to have high-risk disease than patients from urban areas. In multivariate analysis of all patients regardless of risk level, the relative risk of death (RR) did not differ statistically among UUB, UCB, and RUB. However, RCB had a higher risk of death than UUB (RR 1.37, 95% CI 1.14–1.65, p=0.01), UCB (RR 1.18, 95% CI 1.04–1.33, p<0.01), and RUB (RR 1.26, 95% CI 1.06–1.49, p=0.01). This association held in both low- and intermediate-risk patients. Among high-risk patients, both RUB and RCB were at higher risk of death than UUB or UCB, while UCB did not differ from UUB. We found no differences in progression-free survival according to place of residence and trt provider. The use of stem cell transplantation was significantly higher among patients treated by university-based providers (UUB 19%, RUB 16%) than among those treated by community-based providers (UCB 11%, RCB 10%; p < 0.01). Patients from rural areas (RUB and RCB) were slightly less likely to die of lymphoma-related causes than patients from urban areas (75% vs. 80%, p=0.04). Conclusion: Overall survival is inferior in lymphoma patients from rural areas, and this relationship varies with treatment provider and pretreatment risk level. Further studies of patients from rural areas are needed to understand how coordination of care is carried out, in order to design interventions that may reduce this disparity.
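A minimal sketch of the risk grouping and stratified Cox model described above, using the lifelines library and invented data; the factor-count cutoffs follow the abstract, while the cohort, covariates, and follow-up times are hypothetical:

```python
# Minimal sketch (invented data): count adverse factors into the abstract's
# risk groups, then fit a Cox model stratified by lymphoma type, with a
# residence/provider indicator as the exposure of interest.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
n_factors = rng.integers(1, 9, n)                # how many of the 8 factors
risk = pd.cut(n_factors, bins=[0, 3, 5, 8],      # 1-3 / 4-5 / >=6 factors
              labels=["low", "intermediate", "high"])

df = pd.DataFrame({
    "time": rng.exponential(60, n),              # months of follow-up
    "event": rng.integers(0, 2, n),              # death indicator
    "rural_community": rng.integers(0, 2, n),    # 1 = RCB, 0 = other groups
    "high_risk": (risk == "high").astype(int),
    "lymphoma_type": rng.choice(["low_grade", "high_grade", "hodgkin"], n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", strata=["lymphoma_type"])
cph.print_summary()                              # hazard ratios per covariate
```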


2005 · Vol 162 (10) · pp. 1024-1031
Author(s): R. M. Christley, G. L. Pinchbeck, R. G. Bowers, D. Clancy, N. P. French, ...
