Applying International Standards in Response to Oil Spills in Remote Areas

2014 ◽  
Vol 2014 (1) ◽  
pp. 1859-1868 ◽  
Author(s):  
Kelly Reynolds

ABSTRACT It is accepted international practice that the level of effort invested in oil spill contingency planning and preparedness should be related to the best available, location-specific risk evaluations. Accordingly, high-risk and/or highly sensitive areas often see greater degrees of planning and pre-incident resource allocation than low-risk areas. High-risk areas typically include navigational ‘choke points’ for shipping or approaches to ports; highly sensitive areas include areas of intense coastal tourism, mariculture, or natural resources (e.g. coral reefs or mangroves). Naturally, levels of preparedness vary between countries for a variety of reasons, including availability of resources (i.e. funding) and priorities. Whilst logical, this approach to contingency planning leaves a gap in response capacity, insofar as incidents do still occur from time to time in what are normally thought of as extremely low-risk areas. Good examples are the infrequent, yet still important, incidents caused by passing vessel traffic on long-distance, inter-continental routes. Other examples are incidents on scheduled shipping routes servicing remote areas, or from passenger vessels visiting remote locations such as the Arctic or the Antarctic. Because remote areas are often characterised by a general lack of infrastructure, and because local authorities in remote locations typically lack the funds, training and manpower to deal with unexpected oil spill incidents, the intensity and quality of emergency response and post-incident follow-up tend to depend on the involvement of outside parties. The question therefore arises: what is appropriate “international practice” in response operations, in terms of the types and methods of work undertaken, the termination standards applied, health and safety issues, and post-incident follow-up such as monitoring studies? The intent of this paper is to discuss the meaning of “international standards” for oil spill response in the context of remote operations. Practical examples are drawn from remote spills worldwide, including incidents in Tristan da Cunha (in the South Atlantic), Madagascar, and Papua New Guinea.

RMD Open ◽  
2021 ◽  
Vol 7 (2) ◽  
pp. e001524
Author(s):  
Nina Marijn van Leeuwen ◽  
Marc Maurits ◽  
Sophie Liem ◽  
Jacopo Ciaffi ◽  
Nina Ajmone Marsan ◽  
...  

Objectives To develop a prediction model to guide annual assessment of systemic sclerosis (SSc) patients, tailored in accordance with disease activity. Methods A machine learning approach was used to develop a model that can identify patients without disease progression. Patients included in the prospective Leiden SSc cohort and fulfilling the ACR/EULAR 2013 criteria were eligible. Disease progression was defined as progression in ≥1 organ system, and/or start of immunosuppression, or death. Using elastic-net regularisation, and including 90 independent clinical variables (100% complete), we trained the model on 75% of patients and validated it on the remaining 25%, optimising on negative predictive value (NPV) to minimise the likelihood of missing progression. Probability cutoffs for low and high risk of disease progression were identified by expert assessment. Results Of the 492 SSc patients (follow-up range: 2–10 years), 52% showed disease progression during follow-up (median time 4.9 years). Performance of the model in the test set showed an AUC-ROC of 0.66. Probability score cutoffs were defined: low risk of disease progression (<0.197, NPV: 1.0; 29% of patients), intermediate risk (0.197–0.223, NPV: 0.82; 27%) and high risk (>0.223, NPV: 0.78; 44%). The relevant variables for the model were: previous use of cyclophosphamide or corticosteroids, start of immunosuppressive drugs, previous gastrointestinal progression, previous cardiovascular event, pulmonary arterial hypertension, modified Rodnan Skin Score, creatine kinase, and diffusing capacity for carbon monoxide. Conclusion Our machine-learning-assisted model for progression enabled us to classify 29% of SSc patients as ‘low risk’. In this group, annual assessment programmes could be less extensive than indicated by international guidelines.
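
A minimal sketch of the modelling approach described above, assuming random placeholder data in place of the 90 clinical variables (the cohort data and exact pipeline are not public here): an elastic-net-penalised logistic model trained on a 75/25 split, with a probability cutoff evaluated by negative predictive value. This illustrates the technique only; it is not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data standing in for 492 patients x 90 clinical variables.
X, y = np.random.rand(492, 90), np.random.randint(0, 2, 492)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

# Elastic-net-penalised logistic regression (saga solver supports this penalty).
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, max_iter=5000)
model.fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

def npv_at(cutoff: float) -> float:
    """NPV among patients predicted 'no progression' below this cutoff."""
    predicted_negative = probs < cutoff
    if not predicted_negative.any():
        return float("nan")
    return (y_te[predicted_negative] == 0).mean()

print(npv_at(0.197))   # cutoff analogous to the paper's low-risk threshold
```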


2021 ◽  
Vol 24 (3) ◽  
pp. 680-690
Author(s):  
Michiel C. Mommersteeg ◽  
Stella A. V. Nieuwenburg ◽  
Wouter J. den Hollander ◽  
Lisanne Holster ◽  
Caroline M. den Hoed ◽  
...  

Abstract Introduction Guidelines recommend endoscopy with biopsies to stratify patients with gastric premalignant lesions (GPL) into high and low progression risk. High-risk patients are recommended to undergo surveillance. We aimed to assess the accuracy of guideline recommendations in identifying low-risk patients, who can safely be discharged from surveillance. Methods This study includes patients with GPL who underwent at least two endoscopies with an interval of 1–6 years. Patients were defined ‘low risk’ if they fulfilled requirements for discharge, and ‘high risk’ if they fulfilled requirements for surveillance, according to European guidelines (MAPS-2012, updated MAPS-2019, BSG). Patients defined ‘low risk’ who showed progression of disease during follow-up (FU) were considered ‘misclassified’ as low risk. Results 334 patients (median age 60 years, IQR 11; 48.7% male) were included and followed for a median of 48 months. At baseline, 181/334 (54%) patients were defined low risk according to MAPS-2012. Of these, 32.6% were ‘misclassified’, showing progression of disease during FU. If MAPS-2019 were followed, 169/334 (51%) patients were defined low risk, of whom 32.5% were ‘misclassified’. If BSG were followed, 174/334 (52%) patients were defined low risk, of whom 32.2% were ‘misclassified’. Seven patients developed gastric cancer (GC) or dysplasia; of these, four were ‘misclassified’ based on MAPS-2012 and three based on MAPS-2019 and BSG. By performing one additional endoscopy, 72.9% (95% CI 62.4–83.3) of high-risk patients and all patients who developed GC or dysplasia were identified. Conclusion One-third of patients who would have been discharged from GC surveillance appeared to be ‘misclassified’ as low risk. One additional endoscopy would reduce this risk by approximately 70%.


2021 ◽  
Author(s):  
Andreas Fritsche ◽  
Robert Wagner ◽  
Martin Heni ◽  
Kostantinos Kantartzis ◽  
Jürgen Machann ◽  
...  

Lifestyle intervention (LI) can prevent type 2 diabetes, but response to LI varies depending on risk subphenotypes. We tested whether prediabetic individuals at low risk benefit from conventional LI, and whether individuals at high risk benefit from an intensification of LI, in a multi-center randomized controlled intervention over 12 months with 2 years of follow-up. 1105 prediabetic individuals, identified by ADA glucose criteria, were stratified into high- and low-risk phenotypes based on previously described thresholds of insulin secretion, insulin sensitivity and liver fat content. Low-risk individuals were randomly assigned to conventional LI according to the DPP protocol or to control (1:1); high-risk individuals were assigned to conventional LI or to intensified LI with a doubling of the required exercise (1:1). A total of 908 (82%) participants completed the study. In high-risk individuals, the difference between conventional and intensified LI in post-challenge glucose change was -0.29 mmol/l [CI: -0.54; -0.04], p=0.025. Liver fat (-1.34 percentage points [CI: -2.17; -0.50], p=0.002) and cardiovascular risk (-1.82 [CI: -3.13; -0.50], p=0.007) underwent larger reductions with intensified than with conventional LI. During a follow-up of 3 years, intensified LI was associated with a higher probability of normalized glucose tolerance than conventional LI (p=0.008). In conclusion, it is possible to improve glycemic and cardiometabolic outcomes in high-risk individuals with prediabetes by intensifying LI. Individualized, risk-phenotype-based LI may be beneficial for the prevention of diabetes.
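
The design above amounts to threshold-based phenotype stratification followed by 1:1 randomisation within each stratum. The sketch below illustrates this under stated assumptions: the threshold values and variable names are hypothetical placeholders, since the study's actual cutoffs for insulin secretion, insulin sensitivity and liver fat are only referenced, not given, in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def phenotype(insulin_secretion: float, insulin_sensitivity: float,
              liver_fat_pct: float) -> str:
    # Hypothetical stratification rule: impaired secretion, low sensitivity
    # or elevated liver fat places a participant in the high-risk stratum.
    if insulin_secretion < 1.0 or insulin_sensitivity < 5.0 or liver_fat_pct > 5.5:
        return "high-risk"
    return "low-risk"

def assign_arm(risk: str) -> str:
    # 1:1 randomisation within each stratum, mirroring the study arms.
    if risk == "low-risk":
        return rng.choice(["conventional LI", "control"])
    return rng.choice(["conventional LI", "intensified LI"])

print(assign_arm(phenotype(0.8, 4.2, 7.1)))   # toy participant, high-risk stratum
```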


2020 ◽  
Author(s):  
Neda Firouraghi ◽  
Sayyed Mostafa Mostafavi ◽  
Amene Raouf-Rahmati ◽  
Alireza Mohammadi ◽  
Reza Saemi ◽  
...  

Abstract Background: Cutaneous leishmaniasis (CL) is an important public health concern worldwide. Iran is among the most CL-affected countries, listed as one of the six most endemic countries in the world. In order to develop targeted interventions, we performed a space-time visualization of CL cases in an urban area to identify high-risk and low-risk areas during 2016-2019. Methods: This cross-sectional study was conducted in the city of Mashhad. Patient data were gathered from Mashhad health centers. All cases (n=2425) were diagnosed in two stages: an initial diagnosis based on clinical findings, subsequently confirmed by parasitological tests. The data were aggregated at the neighborhood and district levels, and smoothed CL incidence rates per 100,000 individuals were calculated using the spatial empirical Bayesian approach. Furthermore, we used the Anselin Local Moran’s I statistic to identify clusters and outliers of CL distribution in Mashhad during 2016-2019. Results: The overall incidence rate decreased from 34.6 per 100,000 individuals in 2016 to 19.9 per 100,000 in 2019. Cluster analyses of both crude and smoothed incidence rates identified high-risk areas in southwestern Mashhad over the study period. Furthermore, the analyses revealed low-risk areas in northeastern Mashhad over the same period. Conclusions: The southwestern area of Mashhad had the highest CL incidence rates. This information may be of value for designing tailored interventions, such as effective resource-allocation models, informed control plans, and efficient surveillance systems. Furthermore, this study generates new hypotheses for testing potential relationships between socio-economic and environmental risk factors and the incidence of CL in areas with higher associated risks.
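
For readers unfamiliar with the smoothing step, the sketch below shows global empirical Bayes rate smoothing: unstable rates from small populations are shrunk toward the overall mean. The study uses the spatial variant, which shrinks each neighborhood toward a local rather than a global reference rate, but the shrinkage logic is the same. All input numbers here are hypothetical.

```python
import numpy as np

def eb_smoothed_rates(cases: np.ndarray, population: np.ndarray) -> np.ndarray:
    """Empirical-Bayes-smoothed incidence rates per 100,000 (global version)."""
    raw = cases / population
    m = cases.sum() / population.sum()          # global (prior) mean rate
    # Method-of-moments estimate of the prior variance, clipped at zero.
    s2 = np.average((raw - m) ** 2, weights=population) - m / population.mean()
    s2 = max(s2, 0.0)
    w = s2 / (s2 + m / population)              # per-area shrinkage weight:
    return (w * raw + (1 - w) * m) * 100_000    # small areas shrink more

cases = np.array([12, 3, 45, 0, 7])             # hypothetical CL case counts
pop = np.array([15_000, 2_000, 60_000, 1_200, 9_500])
print(eb_smoothed_rates(cases, pop))
```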


Circulation ◽  
2019 ◽  
Vol 140 (Suppl_2) ◽  
Author(s):  
Rebecca Cash ◽  
Madison K Rivard ◽  
Eric Cortez ◽  
David Keseg ◽  
Ashish Panchal

Introduction: Survival from out-of-hospital cardiac arrest (OHCA) varies significantly, which may be due in part to differing rates of bystander cardiopulmonary resuscitation (BCPR). Defining and understanding the community characteristics of high-risk areas (census tracts with low BCPR rates and high OHCA incidence) can help inform novel interventions to improve outcomes. Our objectives were to identify high- and low-risk census tracts in Franklin County, Ohio and to compare their OHCA incidence, BCPR rates, and community characteristics. Methods: This was a cross-sectional analysis of OHCA events treated by the Columbus Division of Fire in Franklin County, Ohio, drawn from the Cardiac Arrest Registry to Enhance Survival between 1/1/2010 and 12/31/2017. Included cases were aged 18 and older, with a cardiac-etiology OHCA in a non-healthcare setting and EMS resuscitation attempted. After geocoding to census tracts, Local Moran’s I and quartiles were used to determine clustering in high-risk areas based on spatial Empirical Bayes smoothed rates. Community characteristics, from the 2014 American Community Survey, were compared between high- and low-risk areas. Results: From the 3,841 included OHCA cases, the mean adjusted OHCA incidence per census tract was 0.67 per 1,000, with a mean adjusted BCPR rate of 31% and mean adjusted survival to discharge of 9.4%. In the 25 census tracts identified as high-risk areas, there were significant differences in characteristics compared to low-risk areas, including a higher proportion of African Americans (64% vs. 21%, p<0.001), lower median household income ($30,948 vs. $54,388, p<0.001), and a higher proportion living below the poverty level (36% vs. 20%, p<0.001). There was a 3-fold increase in adjusted OHCA incidence between high- and low-risk areas (1.68 vs. 0.57 per 1,000, p<0.001), with BCPR rates of 27% and 31% (p=0.31), respectively. Compared to a previous analysis, 9 (36%) census tracts persisted as high-risk, and an additional 16 were newly identified. Conclusions: Neighborhood-level variations in OHCA incidence are dramatic, with marked disparities in characteristics between high- and low-risk areas. It is possible that improving OHCA outcomes requires multifaceted interventions that address social determinants of health.
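
As a companion to the smoothing sketch above, this is a minimal illustration of the Local Moran's I statistic used here to flag clusters of high-risk tracts: a high value surrounded by high-value neighbors (a "high-high" cluster) marks a candidate high-risk area. The weight matrix and rates are toy values; a production analysis would typically use a library such as PySAL rather than hand-rolled code.

```python
import numpy as np

def local_morans_i(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Local Moran's I_i = z_i * sum_j w_ij * z_j, with row-standardised W."""
    z = (x - x.mean()) / x.std()
    Wr = W / W.sum(axis=1, keepdims=True)   # row-standardise the weights
    return z * (Wr @ z)

# Toy example: 4 tracts on a line; neighbours share an edge.
rates = np.array([1.7, 1.5, 0.4, 0.5])      # hypothetical smoothed OHCA rates
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(local_morans_i(rates, W))   # positive I_i with high z_i -> high-high cluster
```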


Circulation ◽  
2015 ◽  
Vol 132 (suppl_3) ◽  
Author(s):  
Yoshitaka Ito ◽  
Kazuhiro Naito ◽  
Katsuhisa Waseda ◽  
Hiroaki Takashima ◽  
Akiyoshi Kurita ◽  
...  

Background: While anticoagulant therapy is standard management for atrial fibrillation (Af), dual antiplatelet therapy (DAPT) is needed after stent implantation for coronary artery disease. The HAS-BLED score estimates the risk of major bleeding for patients on anticoagulation, to assess risk-benefit in Af care. However, little is known about the usefulness of the HAS-BLED score in Af patients treated with coronary stents requiring DAPT or DAPT plus warfarin (triple therapy: TT). The aim of this study was to evaluate the role of the HAS-BLED score on major bleeding in Af patients undergoing DAPT or TT. Methods: A total of 837 consecutive patients underwent PCI in our hospital from Jan. 2007 to Dec. 2010, of whom 66 had Af or paroxysmal Af at the time of PCI. Clinical events including major bleeding (cerebral or gastrointestinal bleeding) were investigated for up to 3 years. Patients were divided into 2 groups based on HAS-BLED score (High-risk group: HAS-BLED score ≥4, n=19; Low-risk group: HAS-BLED score <4, n=47). DAPT was required for a minimum of 12 months after stent implantation, and warfarin was prescribed at the physicians’ discretion. Management and changes of antiplatelet and anticoagulant therapy during the follow-up period were also at the physicians’ discretion. Results: Baseline characteristics did not differ between the High-risk and Low-risk groups except for age. Major bleeding was observed in 8 cases (12.1%) at 3 years of follow-up, and was significantly more frequent in the High-risk group than in the Low-risk group (31.6% vs. 4.3%, p=0.002). However, management of DAPT and TT did not differ between the 2 groups. Among the components of the HAS-BLED score, renal dysfunction and bleeding history contributed most to higher scores. Conclusion: Major bleeding events were more frequently observed in the High-risk group than in the Low-risk group in patients with Af following DES implantation, regardless of antiplatelet/anticoagulant therapy.
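
For reference, a minimal sketch of HAS-BLED scoring and the High-/Low-risk split used in this study (score ≥4 vs <4). Point values follow the published HAS-BLED definition (one point per component, maximum 9); the patient fields are hypothetical names, not the study's data model.

```python
from dataclasses import dataclass

@dataclass
class AfPatient:
    hypertension: bool
    abnormal_renal: bool       # abnormal renal function (1 point)
    abnormal_liver: bool       # abnormal liver function (1 point)
    stroke: bool
    bleeding_history: bool     # bleeding history or predisposition
    labile_inr: bool
    age_over_65: bool          # "elderly"
    drugs: bool                # concomitant antiplatelet / NSAID use
    alcohol: bool

def has_bled(p: AfPatient) -> int:
    # One point per positive component; booleans sum as 0/1.
    return sum([p.hypertension, p.abnormal_renal, p.abnormal_liver,
                p.stroke, p.bleeding_history, p.labile_inr,
                p.age_over_65, p.drugs, p.alcohol])

def risk_group(p: AfPatient) -> str:
    return "High-risk" if has_bled(p) >= 4 else "Low-risk"
```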


2020 ◽  
Vol 12 (16) ◽  
pp. 6305 ◽  
Author(s):  
Edris Alam

Over the last thirty years, Bangladesh has been experiencing hill-cutting problems and subsequent landslides in its southeastern hilly region. Since 2000, landslides have caused over 500 deaths, mostly in informal settlements in southeast Bangladesh. The most significant single event was the 2007 landslide that caused 127 deaths in Chittagong’s informal settlements; another landslide event killed over 110 people in Rangamati on 12 June 2017. Against this backdrop of rising landslide deaths in the southeastern region, this research aimed to understand communities’ landslide hazard knowledge, reasons for living in at-risk areas, risk perception and preparedness. The research applied both quantitative (i.e., structured questionnaire) and qualitative (i.e., semi-structured and open-ended questionnaire and informal interviews) data collection techniques to assess hill-top and hill-side dwellers’ knowledge, risk perception and preparedness for landslides in southeast Bangladesh. The investigation comprised face-to-face interviews with 208 community members, 15 key informant interviews, three Focus Group Discussions (FGDs), and field observations and visits in southeast Bangladesh. The findings suggest that unplanned development activities, overpopulation, settlement along hill slopes and ineffective disaster risk reduction efforts are the main anthropogenic contributors which, together with climate-change-induced increases in torrential rainfall, explain the rise in landslide occurrence. The results suggest that community members perceive a low risk of landslides, despite their location in high-risk areas. This perception of low risk results in a lack of preparedness and an unwillingness to relocate to a comparatively safer place. It was noted that landslide disaster preparation depends on the community’s development maturity, house ownership, ethnicity, gender and economic status of residents. It is suggested that places of relocation for residents living in high-risk areas should be selected with full consideration of the psychosocial aspects of the community, particularly the provision of acceptable livelihood options.


2020 ◽  
Vol 41 (Supplement_1) ◽  
Author(s):  
W Sun ◽  
B P Y Yan

Abstract Background We have previously demonstrated that unselected screening for atrial fibrillation (AF) in patients ≥65 years old in an out-patient setting yielded 1-2% new AF each time screen-negative patients underwent repeated screening at a 12- to 18-month interval. Selection criteria to identify high-risk patients for repeated AF screening may be more efficient than repeat screening of all patients. Aims This study aimed to validate the CHA2DS2-VASc score as a predictive model for selecting a target population for repeat AF screening. Methods 17,745 consecutive patients underwent 24,363 index AF screenings (26.9% of patients underwent repeated screening) using a handheld single-lead ECG (AliveCor) from Dec 2014 to Dec 2017 (NCT02409654). Adverse clinical outcomes to be predicted included (i) new AF detected by repeated screening; (ii) new AF clinically diagnosed during follow-up; and (iii) ischemic stroke/transient ischemic attack (TIA) during follow-up. Performance evaluation and validation of the CHA2DS2-VASc score as a prediction model was based on 15,732 subjects, 35,643 person-years of follow-up and 765 outcome events. Internal validation was conducted by k-fold cross-validation (k = n = 15,732, i.e., leave-one-out cross-validation). Performance measures included the c-index for discriminatory ability and decision curve analysis for clinical utility. Risk groups were defined by CHA2DS2-VASc score: ≤1 (low), 2-3 (intermediate) and ≥4 (high). Calibration was assessed by comparing predicted probabilities with proportions of observed events. Results The CHA2DS2-VASc score achieved acceptable discrimination, with a c-index of 0.762 (95% CI: 0.746-0.777) for derivation and 0.703 for cross-validation. Decision curve analysis showed that using CHA2DS2-VASc to select patients for rescreening was superior to rescreening all or no patients in terms of net benefit across all reasonable threshold probabilities (Figure 1, left). Predicted and observed probabilities of adverse clinical outcomes progressively increased with increasing CHA2DS2-VASc score (Figure 1, right): 0.7% outcome events in the low-risk group (CHA2DS2-VASc ≤1, predicted prob. ≤0.86%), 3.5% in the intermediate-risk group (CHA2DS2-VASc 2-3, predicted prob. 2.62%-4.43%) and 11.3% in the high-risk group (CHA2DS2-VASc ≥4, predicted prob. ≥8.50%). The odds ratios for outcome events were 4.88 (95% CI: 3.43-6.96) for the intermediate- versus low-risk group, and 17.37 (95% CI: 12.36-24.42) for the high- versus low-risk group. Conclusion Repeat AF screening of a high-risk population may be more efficient than rescreening all screen-negative individuals. The CHA2DS2-VASc score may be used as a selection tool to identify high-risk patients for repeat AF screening.
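
For reference, a minimal sketch of CHA2DS2-VASc scoring and the three risk groups used above (≤1 low, 2-3 intermediate, ≥4 high). Point values follow the published CHA2DS2-VASc definition; the field names are hypothetical, not the study's data model.

```python
from dataclasses import dataclass

@dataclass
class ScreeningPatient:
    chf: bool                  # congestive heart failure / LV dysfunction (1)
    hypertension: bool         # (1)
    diabetes: bool             # (1)
    stroke_tia: bool           # prior stroke / TIA / thromboembolism (2)
    vascular_disease: bool     # (1)
    female: bool               # sex category (1)
    age: int                   # 65-74 -> 1 point, >=75 -> 2 points

def cha2ds2_vasc(p: ScreeningPatient) -> int:
    score = p.chf + p.hypertension + p.diabetes + p.vascular_disease + p.female
    score += 2 * p.stroke_tia
    score += 2 if p.age >= 75 else (1 if 65 <= p.age <= 74 else 0)
    return score

def rescreening_group(score: int) -> str:
    return "low" if score <= 1 else ("intermediate" if score <= 3 else "high")

print(rescreening_group(cha2ds2_vasc(
    ScreeningPatient(False, True, False, False, True, True, 72))))  # -> "high"
```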

