Beyond CQI-9: A Customized Approach to CQI-9 and Supplier Management

Author(s):  
Andrew Armuth ◽  
Nicholas Grider

Abstract: The current Automotive Industry Action Group (AIAG) CQI-9 audit process has limited effectiveness in proactively detecting heat treatment quality risks in Tier-1 and Tier-2 supply bases. A cross-functional engineering organization developed an improved supplier audit form using CQI-9, the National Aerospace and Defense Contractors Accreditation Program (NADCAP), International Automotive Task Force (IATF) standards, and specific internal company standards to distinguish and quantify production issues that may go undetected under the existing CQI-9 approach. Representatives from engineering, commercial, and manufacturing functions crafted a more complete approach to supply chain quality. This new audit format (Beyond CQI-9) has demonstrated the ability to quantify heat treatment concerns, reduce future engineering resource costs, and develop new and existing heat treatment suppliers to meet world-class quality levels.

Author(s):  
Carole Morris ◽  
Ashley Grey

Introduction: Since its inception in 2015, the NHS Scotland Public Benefit and Privacy Panel (PBPP) has approved over 200 applications for access to data. The PBPP is accountable to the public and must demonstrate how it assesses applications for data use in terms of envisaged public benefit and potential privacy risks. Objectives and Approach: The first annual audit took place in 2017. Its purpose was twofold: to establish that the governance process is robust, and that the proportionate governance criteria are the correct measurement tool. The full PBPP Committee reviewed a random selection of 10 applications approved at Tier 1 between January 2016 and December 2016. Committee members were split into groups and sent the paperwork relating to each application. A review record was completed covering the questions within the proportionate governance criteria. Review records were sent to the PBPP Panel Manager for collation, and an audit record was compiled for each application. Results: Applications were identified where a discrepancy existed between the Tier 1 decision and the PBPP Committee audit review. These audit records were tabled for discussion at a workshop involving a subgroup of PBPP Committee and Tier 1 panel members. Of the ten randomly selected applications, six were reviewed consistently by both the Tier 1 panel and the Tier 2 Committee, with no referral points identified by either Tier. Four were identified for discussion in a workshop including representatives from both Tiers. During the discussion it was agreed that two of the four should have triggered a further review by Tier 2, but that the decision to approve all four applications would have stood. Conclusion/Implications: This suggests that both Tiers have a sound understanding of the proportionate governance criteria, that for the majority of applications the criteria are being interpreted uniformly, and that the audit process is required to ensure this is maintained going forward.


2021 ◽  
pp. 109830072199608
Author(s):  
Angus Kittelman ◽  
Sterett H. Mercer ◽  
Kent McIntosh ◽  
Robert Hoselton

The purpose of this longitudinal study was to examine patterns in implementation of Tier 2 and 3 school-wide positive behavioral interventions and supports (SWPBIS) systems to identify timings of installation that led to higher implementation of advanced tiers. Extant data from 776 schools in 27 states reporting on the first 3 years of Tier 2 implementation and 359 schools in 23 states reporting on the first year of Tier 3 implementation were analyzed. Using structural equation modeling, we found that higher Tier 1 implementation predicted subsequent Tier 2 and Tier 3 implementation. In addition, waiting 2 or 3 years after initial Tier 1 implementation to launch Tier 2 systems predicted higher initial Tier 2 implementation (compared with implementing the next year). Finally, we found that launching Tier 3 systems after Tier 2 systems, compared with launching both tiers simultaneously, predicted higher Tier 2 implementation in the second and third year, so long as Tier 3 systems were launched within 3 years of Tier 2 systems. These findings provide empirical guidance for when to launch Tier 2 and 3 systems; however, we emphasize that delays in launching advanced systems should not equate to delays in more intensive supports for students.


2021 ◽  
Vol 13 (15) ◽  
pp. 8420
Author(s):  
Peter W. Sorensen ◽  
Maria Lourdes D. Palomares

To assess whether and how socioeconomic factors might be influencing global freshwater finfisheries, inland fishery data reported to the FAO between 1950 and 2015 were grouped by capture and culture, country human development index, plotted, and compared. We found that while capture of inland finfishes has greatly increased on a global scale, this trend is being driven almost entirely by poorly developed (Tier-3) countries, which also identify only 17% of their catch. Capture finfisheries have recently plateaued in moderately developed (Tier-2) countries, which identify 16% of their catch but are dominated by a single country, China. In contrast, reported capture finfisheries are declining in well-developed (Tier-1) countries, which identify nearly all (78%) of their fishes. Simultaneously, aquacultural activity has been increasing rapidly in both Tier-2 and Tier-3 countries, but only slowly in Tier-1 countries; remarkably, nearly all cultured species are being identified by all tier groups. These distinctly different trends suggest that socioeconomic factors influence how countries report and conduct capture finfisheries. Reported rapid increases in capture fisheries are worrisome in poorly developed countries because the increases cannot be explained, so these fisheries cannot be managed meaningfully even though these countries depend on them for food. Our descriptive, proof-of-concept study suggests that socioeconomic factors should be considered in future, more sophisticated efforts to understand global freshwater fisheries, which might include catch reconstruction.


2020 ◽  
Vol 12 (1) ◽  
pp. 851-865
Author(s):  
Sukonmeth Jitmahantakul ◽  
Piyaphong Chenrai ◽  
Pitsanupong Kanjanapayont ◽  
Waruntorn Kanitpanyacharoen

Abstract: A well-developed multi-tier polygonal fault system is located in the Great South Basin offshore New Zealand’s South Island. The system has been characterised using a high-quality three-dimensional seismic survey tied to available exploration boreholes using regional two-dimensional seismic data. In this study area, two polygonal fault intervals are identified and analysed, Tier 1 and Tier 2. Tier 1 coincides with the Tucker Cove Formation (Late Eocene) with small polygonal faults. Tier 2 is restricted to the Paleocene-to-Late Eocene interval with a great number of large faults. In map view, polygonal fault cells are outlined by a series of conjugate pairs of normal faults. The polygonal faults are demonstrated to be controlled by depositional facies, specifically offshore bathyal deposits characterised by fine-grained clays, marls and muds. Fault throw analysis is used to understand the propagation history of the polygonal faults in this area. Tier 1 and Tier 2 initiated in about the Late Eocene and Early Eocene, respectively, based on their maximum fault throws. A set of three-dimensional fault throw images within Tier 2 shows that the maximum fault throws of the inner polygonal fault cell occur at the same age, while the outer polygonal fault cell exhibits maximum fault throws at shallower levels of different ages. The polygonal fault systems are believed to be related to dewatering of the sedimentary formation during diagenesis. Interpretation of the polygonal faults in this area is useful in assessing the migration pathway and sealing ability of the Eocene mudstone sequence in the Great South Basin.


2021 ◽  
Vol 39 (28_suppl) ◽  
pp. 14-14
Author(s):  
Charu Aggarwal ◽  
Melina Elpi Marmarelis ◽  
Wei-Ting Hwang ◽  
Dylan G. Scholes ◽  
Aditi Puri Singh ◽  
...  

Background: Current NCCN guidelines recommend comprehensive molecular profiling for all newly diagnosed patients with metastatic non-squamous NSCLC to enable the delivery of personalized medicine. We have previously demonstrated that incorporation of plasma-based next-generation gene sequencing (NGS) improves detection of clinically actionable mutations in patients with advanced NSCLC (Aggarwal et al, JAMA Oncology, 2018). To increase rates of comprehensive molecular testing at our institution, we adapted our clinical practice to include concurrent use of plasma (P) and tissue (T) based NGS upon initial diagnosis. P NGS testing was performed using a commercial 74-gene assay. We analyzed the impact of this practice change on guideline-concordant molecular testing at our institution. Methods: A retrospective cohort study of patients with newly diagnosed metastatic non-squamous NSCLC following the implementation of this practice change in 12/2018 was performed. Tiers of NCCN guideline-concordant testing were defined as follows. Tier 1: complete EGFR, ALK, BRAF, ROS1, MET, RET, and NTRK testing; Tier 2: as above, but with incomplete NTRK testing; Tier 3: > 2 genes tested; Tier 4: single-gene testing; Tier 5: no testing. Proportions of patients with comprehensive molecular testing by modality (T NGS vs. T+P NGS) were compared using a one-sided Fisher’s exact test. Results: Between 01/2019 and 12/2019, 170 patients with newly diagnosed metastatic non-Sq NSCLC were treated at our institution. Overall, 98.2% (167/170) of patients underwent molecular testing, Tier 1: n = 100 (59%), Tier 2: n = 39 (23%), Tier 3/4: n = 28 (16.5%), Tier 5: n = 3 (2%). Amongst these patients, 43.1% (72/167) were tested with T NGS alone, 8% (15/167) with P NGS alone, and 47.9% (80/167) with T+P NGS. A higher proportion of patients underwent comprehensive molecular testing (Tiers 1+2) using T+P NGS: 95.7% (79/80) compared to T alone: 62.5% (45/72), p < 0.0005.
Prior to the initiation of first line treatment, 72.4% (123/170) patients underwent molecular testing, Tier 1: n = 73 (59%), Tier 2: n = 27 (22%) and Tier 3/4: n = 23 (18%). Amongst these, 39% (48/123) were tested with T NGS alone, 7% (9/123) with P NGS alone and 53.6% (66/123) with T+P NGS. A higher proportion of patients underwent comprehensive molecular testing (Tiers 1+2) using T+P NGS, 100% (66/66) compared to 52% (25/48) with T NGS alone (p < 0.0005). Conclusions: Incorporation of concurrent T+P NGS testing in treatment naïve metastatic non-Sq NSCLC significantly increased the proportion of patients undergoing guideline concordant molecular testing, including prior to initiation of first-line therapy at our institution. Concurrent T+P NGS should be adopted into institutional pathways and routine clinical practice.
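The one-sided Fisher's exact comparison above can be reproduced from the reported 2x2 counts (79/80 comprehensive with T+P NGS vs. 45/72 with tissue NGS alone). A minimal stdlib-only sketch, assuming a hypergeometric "greater" tail; the original analysis may have used different statistical software:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided (greater) Fisher's exact test for the 2x2 table
    [[a, b], [c, d]]: probability of observing at least `a`
    successes in row 1 under the hypergeometric null."""
    row1 = a + b          # size of group 1
    col1 = a + c          # total successes
    n = a + b + c + d     # grand total
    # Sum hypergeometric probabilities over tables at least as extreme
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        p += comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    return p

# Counts from the abstract: comprehensive testing (Tiers 1+2),
# 79/80 with T+P NGS vs. 45/72 with tissue NGS alone
p = fisher_exact_one_sided(79, 1, 45, 27)
print(p)  # far below 0.0005, consistent with the reported p < 0.0005
```

The observed count (79 of 80) lies several standard deviations above the hypergeometric mean (~65 of 80), so the tail probability is vanishingly small.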


Circulation ◽  
2016 ◽  
Vol 133 (suppl_1) ◽  
Author(s):  
Nina P Paynter ◽  
Raji Balasubramanian ◽  
Shuba Gopal ◽  
Franco Giulianini ◽  
Leslie Tinker ◽  
...  

Background: Prior studies of metabolomic profiles and coronary heart disease (CHD) have been limited by relatively small case numbers and scant data in women. Methods: The discovery set examined 371 metabolites in 400 confirmed, incident CHD cases and 400 controls (frequency matched on age, race/ethnicity, hysterectomy status and time of enrollment) in the Women’s Health Initiative Observational Study (WHI-OS). All selected metabolites were validated in a separate set of 394 cases and 397 matched controls drawn from the placebo arms of the WHI Hormone Therapy trials and the WHI-OS. Discovery used 4 methods: false-discovery rate (FDR) adjusted logistic regression for individual metabolites, permutation-corrected least absolute shrinkage and selection operator (LASSO) algorithms, sparse partial least squares discriminant analysis (PLS-DA) algorithms, and random forest algorithms. Each method was performed with matching factors only and with matching plus both medication use (aspirin, statins, anti-diabetics and anti-hypertensives) and traditional CHD risk factors (smoking, systolic blood pressure, diabetes, total and HDL cholesterol). Replication in the validation set was defined as a logistic regression coefficient of p<0.05 for the metabolites selected by 3 or 4 methods (tier 1), or an FDR-adjusted p<0.05 for metabolites selected by only 1 or 2 methods (tier 2). Results: Sixty-seven metabolites were selected in the discovery data set (30 tier 1 and 37 tier 2). Twenty-six successfully replicated in the validation data set (21 tier 1 and 5 tier 2), with 25 significant when adjusting for matching factors only and 11 significant after additionally adjusting for medications and CHD risk factors. Validated metabolites included amino acids, sugars, nucleosides, eicosanoids, plasmalogens, polyunsaturated phospholipids and highly saturated triglycerides. These include novel metabolites as well as metabolites such as glutamate/glutamine, which have been associated with CHD in other populations. Conclusions: Multiple metabolites in important physiological pathways with robust associations for risk of CHD in women were identified and replicated. These results may offer insights into biological mechanisms of CHD as well as identify potential markers of risk.
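FDR-adjusted p-values of the kind used in both the discovery and tier-2 replication criteria are typically computed with the Benjamini-Hochberg procedure. A minimal sketch with illustrative p-values (not the study's data):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (q-values),
    returned in the original input order."""
    m = len(pvals)
    # Indices that sort the p-values ascending
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest rank down, enforcing monotonicity:
    # q_(k) = min(q_(k+1), p_(k) * m / k)
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# Illustrative p-values only, not drawn from the abstract
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
q = benjamini_hochberg(pvals)
print([round(v, 4) for v in q])
```

A metabolite is then called significant at FDR 0.05 when its adjusted value is below 0.05, mirroring the tier-2 replication rule.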


Author(s):  
Aletta Sophia Tolmay

The sustainability of automotive component suppliers is under threat due to various global challenges. The literature suggests that only the personal relationship itself can differentiate suppliers within supply chains, and it encourages further insight into the conceptualization of personal interaction and trust within supply chains. This paper reports on research that tested the importance of trust and its directional linear relationship with personal interaction. Personal interaction showed a significant correlation with trust, indicating that the actions of the Tier 2 supplier during the sourcing process can substantially influence trust with the Tier 1 buyer. It is accordingly crucial for automotive component suppliers to invest in strategies that increase personal interaction with their buyers, in order to build trust and, in turn, promote perceived customer value and customer retention.


Author(s):  
James B O'Keefe ◽  
Elizabeth J Tong ◽  
Thomas H Taylor ◽  
Ghazala D Datoo O'Keefe ◽  
David C Tong

Objective: To determine whether a risk prediction tool developed and implemented in March 2020 accurately predicts subsequent hospitalizations. Design: Retrospective cohort study; enrollment from March 24 to May 26, 2020, with follow-up calls until hospitalization or clinical improvement (final calls until June 19, 2020). Setting: Single-center telemedicine program managing outpatients from a large medical system in Atlanta, Georgia. Participants: 496 patients with laboratory-confirmed COVID-19 in isolation at home. Exclusion criteria included: (1) hospitalization prior to telemedicine program enrollment, (2) immediate discharge with no follow-up calls due to resolution. Exposure: Acute COVID-19 illness. Main Outcome and Measures: Hospitalization was the outcome; days to hospitalization was the metric. Survival analysis using Cox regression was used to determine factors associated with hospitalization. Results: The risk-assessment rubric assigned 496 outpatients to risk tiers as follows: Tier 1, 237 (47.8%); Tier 2, 185 (37.3%); Tier 3, 74 (14.9%). Subsequent hospitalizations numbered 3 (1%), 15 (7%), and 17 (23%) for Tiers 1-3, respectively. From a Cox regression model with age ≥ 60, gender, and self-reported obesity as covariates, the adjusted hazard ratios using Tier 1 as reference were: Tier 2 HR=3.74 (95% CI, 1.06-13.27; P=0.041); Tier 3 HR=10.87 (95% CI, 3.09-38.27; P<0.001). Tier was the strongest predictor of time to hospitalization. Conclusions and Relevance: A telemedicine risk assessment tool prospectively applied to an outpatient population with COVID-19 identified both low-risk and high-risk patients with better performance than individual risk factors alone. This approach may be appropriate for optimum allocation of resources.


2018 ◽  
Vol 64 (3) ◽  
pp. 253-277
Author(s):  
Vighneswara Swamy

Abstract: The study estimates the Basel-III capital requirement for Indian banks using the reported tier-1 capital, tier-2 capital, total capital and risk-weighted assets (RWAs) sourced from the Basel disclosures made by the banks on their websites. In order to understand the strategy and the response of different bank groups based on their ownership styles, this study groups the banks into scheduled commercial banks, public sector banks, and private banks, and considers data for the period 2002–2011. The results suggest that with an assumed growth of RWAs at 10%, banks in India would require additional minimum tier-1 capital of INR 2.51 trillion. With assumed RWA growth of 12% and 15%, the requirement would be on the order of INR 3.36 trillion and INR 4.74 trillion, respectively. JEL classifications: E44, E61, G2, G21, G28 Keywords: Basel III, capital and liquidity, commercial banks, capital, countercyclical capital buffers, financial (in)stability
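The projection underlying these estimates can be sketched as simple compound-growth arithmetic: grow RWAs at an assumed rate, apply a minimum tier-1 ratio, and subtract current tier-1 capital. All figures and the 8.5% ratio (6% tier-1 minimum plus 2.5% conservation buffer under Basel III) below are illustrative assumptions, not the study's inputs or methodology:

```python
def additional_tier1_needed(current_t1, rwa, growth, years, min_ratio):
    """Extra tier-1 capital needed to hold `min_ratio` against RWAs
    projected at a constant annual growth rate for `years` years."""
    projected_rwa = rwa * (1 + growth) ** years
    required_t1 = projected_rwa * min_ratio
    return max(0.0, required_t1 - current_t1)

# Hypothetical figures (INR trillion): tier-1 capital of 4.0
# against RWAs of 60.0, projected over 5 years at 10% and 15%
gap_10 = additional_tier1_needed(4.0, 60.0, 0.10, 5, 0.085)
gap_15 = additional_tier1_needed(4.0, 60.0, 0.15, 5, 0.085)
print(round(gap_10, 2), round(gap_15, 2))
```

As in the abstract, a higher assumed RWA growth rate translates directly into a larger capital shortfall.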

