Boosting HSE Management More Efficiently and Sustainably: How Innovation Can Bring Change in Traditional HSE Mindsets?

2021 ◽  
Author(s):  
Hideharu Yonebayashi ◽  
Atsushi Kobayashi ◽  
Susumu Hirano ◽  
Masami Okawara ◽  
Takao Iwata

Abstract As part of a laboratory Health, Safety and Environment (HSE) management system, working environment control is applied to eliminate various occupational hazards to workers. This control is a continuous effort in our petroleum R&D laboratory, forming the core of its working environment management system. As an element of that system, workplace inspection is included among the regular HSE activities. Although traditional and well established, workplace inspection has been continuously improved and optimized in several respects: inspection design, selection of inspection members, checklists, and feedback. To make these continual-improvement practices more practical and effective, workplace features such as the laboratory-specific environment and ad hoc research programs have been incorporated into the inspection design. All findings are summarized immediately after every inspection, and the types of risk hidden in the findings and the necessary corrective actions are then discussed. Findings, risks, and corrective measures should all be swiftly shared with every employee in the workplace. The checklist format has been optimized both for easier recording by inspectors and for accurate feedback to the responsible personnel so that the right countermeasures can be taken. The paper analyses a large dataset of workplace inspection results from the past 10 years. The analysis reveals that hazardous sources have decreased in recent years, reflecting the maturing HSE culture in our laboratory. A combined cycle of inspection activity and data analysis is useful for understanding the current status of working environment control and for planning further updates. This paper discusses a practical example of a laboratory HSE management system at both the detailed and the high level. Furthermore, the potential of applying artificial intelligence and deep learning to future workplace inspections is discussed. This forward-looking discussion helps keep employees' traditional mindsets fresh.
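The kind of multi-year trend analysis the abstract describes, counting inspection findings per year to see whether hazardous sources are decreasing, can be sketched as follows; the records and hazard categories are hypothetical illustrations, not data from the paper.

```python
from collections import Counter

# Hypothetical inspection records: (year, hazard_category) pairs,
# standing in for the 10-year findings dataset described above.
findings = [
    (2012, "chemical"), (2012, "electrical"), (2012, "ergonomic"),
    (2013, "chemical"), (2013, "electrical"),
    (2020, "chemical"),
    (2021, "ergonomic"),
]

def findings_per_year(records):
    """Count inspection findings per year to reveal long-term trends."""
    counts = Counter(year for year, _ in records)
    return dict(sorted(counts.items()))

trend = findings_per_year(findings)
# A declining count across years would support the "maturing HSE
# culture" interpretation discussed in the paper.
```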

2019 ◽  
Vol 97 (Supplement_3) ◽  
pp. 52-53
Author(s):  
Ignacy Misztal

Abstract Early application of genomic selection relied on SNP estimation with phenotypes or de-regressed proofs (DRP). Chips of 50k SNP seemed sufficient. The estimated breeding value was an index with parent average and a deduction to eliminate double counting. SNP selection or weighting increased accuracy with small data sets, but little or not at all with large data sets. Use of DRP with female information required ad hoc modifications. As BLUP is biased by genomic selection, use of DRP under genomic selection required adjustments. Efforts to include potentially causative SNP derived from sequence analysis showed limited or no gain. Genomic selection was greatly simplified by single-step GBLUP (ssGBLUP), because the procedure automatically creates the index, can use any combination of male and female genotypes, and accounts for preselection. ssGBLUP requires careful scaling for compatibility between pedigree and genomic relationships to avoid biases, especially under strong selection. Large-data computations in ssGBLUP were solved by exploiting the limited dimensionality of SNP data due to limited effective population size. With this dimensionality ranging from about 4k in chicken to about 15k in Holsteins, the inverse of the genomic relationship matrix (GRM) can be created directly (e.g., by the APY algorithm) at linear cost. Due to its simplicity and accuracy, ssGBLUP is routinely used for genomic selection by major companies in chicken, pigs, and beef. ssGBLUP can also be used to derive SNP effects for indirect prediction and for GWAS, including computation of P-values. An alternative single-step method, ssBR, uses SNP effects instead of the GRM. As BLUP is affected by preselection, there is a need for new validation procedures unaffected by selection, and for parameter estimation that accounts for all the genomic data used in selection. Another issue is the reduction of variances due to the Bulmer effect.
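The genomic relationship matrix at the heart of GBLUP and ssGBLUP is commonly built with VanRaden's first method, G = ZZ'/(2Σp(1-p)), where Z is the allele-frequency-centred genotype matrix. A minimal sketch, with toy genotype data invented for illustration (the abstract itself gives no data or code):

```python
import numpy as np

def vanraden_grm(M):
    """Genomic relationship matrix (VanRaden method 1).

    M: (n_animals x n_snp) genotype matrix coded 0/1/2.
    Returns G = Z Z' / (2 * sum(p_j * (1 - p_j))), with Z = M - 2p.
    """
    M = np.asarray(M, dtype=float)
    p = M.mean(axis=0) / 2.0      # observed allele frequency per SNP
    Z = M - 2.0 * p               # centre by expected genotype 2p
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom

# Toy genotypes: 3 animals at 4 SNP (hypothetical data)
M = [[0, 1, 2, 1],
     [1, 1, 0, 2],
     [2, 0, 1, 1]]
G = vanraden_grm(M)
```

In practice it is the inverse of G that enters the mixed-model equations, and with SNP dimensionality limited to a few thousand that inverse can be obtained at linear cost via the APY algorithm mentioned above.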


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Andrea Duggento ◽  
Marco Aiello ◽  
Carlo Cavaliere ◽  
Giuseppe L. Cascella ◽  
Davide Cascella ◽  
...  

Breast cancer is one of the most common cancers in women, with more than 1,300,000 cases and 450,000 deaths each year worldwide. Recent studies have shown that early breast cancer detection, along with suitable treatment, could significantly reduce breast cancer death rates in the long term. X-ray mammography is still the instrument of choice in breast cancer screening, yet the false-positive and false-negative rates commonly achieved by radiologists are extremely difficult to estimate and control, although some authors have estimated figures of up to 20% of total diagnoses or more. The introduction of novel artificial intelligence (AI) technologies applied to the diagnosis and, possibly, the prognosis of breast cancer could revolutionize the management of the breast cancer patient by assisting the radiologist in clinical image interpretation. Lately, a breakthrough in the AI field has been brought about by the introduction of deep learning techniques in general and of convolutional neural networks (CNNs) in particular. Such techniques require no a priori feature space definition from the operator and can achieve classification performances that even surpass human experts. In this paper, we design and validate an ad hoc CNN architecture specialized in breast lesion classification from imaging data only. We explore a total of 260 model architectures in a train-validation-test split in order to propose a model selection criterion that emphasizes reducing false negatives while still retaining acceptable accuracy. We achieve an area under the receiver operating characteristic curve of 0.785 (accuracy 71.19%) on the test set, demonstrating how a randomly initialized ad hoc architecture can and should be fine-tuned to a specific problem, especially in biomedical applications.
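A selection criterion that "poses the emphasis on reducing false negatives while still retaining acceptable accuracy" can be sketched as a weighted combination of sensitivity and accuracy computed from each candidate model's validation confusion matrix. The weighting scheme and the confusion-matrix numbers below are assumptions for illustration, not the paper's actual criterion or results.

```python
def fn_weighted_score(tp, fp, tn, fn, fn_weight=3.0):
    """Rank candidate models while penalising false negatives.

    Blends sensitivity (recall) and accuracy, weighting missed
    lesions (fn) more heavily via fn_weight. Hypothetical criterion
    for illustration only.
    """
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return (fn_weight * sensitivity + accuracy) / (fn_weight + 1.0)

# Two hypothetical candidate architectures evaluated on validation data:
# model A is slightly more accurate overall; model B misses fewer lesions.
score_a = fn_weighted_score(tp=70, fp=10, tn=15, fn=5)   # acc 0.85, sens ~0.93
score_b = fn_weighted_score(tp=74, fp=15, tn=10, fn=1)   # acc 0.84, sens ~0.99
```

Under such a criterion, model B outranks model A despite its lower raw accuracy, which is the desired behaviour when a false negative is clinically costlier than a false positive.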


2021 ◽  
Vol 28 (1) ◽  
pp. e100307
Author(s):  
Janice Miller ◽  
Frances Gunn ◽  
Malcolm G Dunlop ◽  
Farhat VN Din ◽  
Yasuko Maeda

Objectives A customised data management system was required for a rapidly implemented COVID-19-adapted colorectal cancer pathway in order to mitigate the risks of delayed and missed diagnoses during the pandemic. We assessed its performance and robustness.
Methods A system was developed using Microsoft Excel (2007) to retain the spreadsheet's intuitiveness of direct data entry. Visual Basic for Applications (VBA) was used to construct a user-friendly interface to enhance the efficiency of data entry and to segregate the data for operational tasks.
Results Large-scale data segregation was possible using VBA macros. Data validation and conditional formatting minimised data entry errors. Computation by the COUNT function facilitated live data monitoring.
Conclusion It is possible to rapidly implement a makeshift database system with clinicians' regular input. Large-volume data management using a spreadsheet system is possible with appropriate data definition and VBA-programmed data segregation. The described concept is applicable to the construction of any data management system requiring speed and flexibility in a resource-limited situation.
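The two operations the abstract attributes to VBA, macro-driven segregation of rows into stage-specific views and COUNT-style cells for live monitoring, can be expressed language-neutrally; the Python sketch below mirrors that logic, with record and field names invented for illustration.

```python
# Sketch of the data-segregation and live-count logic the abstract
# describes implementing in Excel/VBA; field names are hypothetical.
records = [
    {"id": 1, "pathway_stage": "triage"},
    {"id": 2, "pathway_stage": "imaging"},
    {"id": 3, "pathway_stage": "triage"},
    {"id": 4, "pathway_stage": "completed"},
]

def segregate(rows, field):
    """Split rows into per-value buckets, like a macro copying rows
    to stage-specific worksheets."""
    buckets = {}
    for row in rows:
        buckets.setdefault(row[field], []).append(row)
    return buckets

def live_counts(buckets):
    """Per-stage totals, analogous to COUNT-based monitoring cells."""
    return {stage: len(rows) for stage, rows in buckets.items()}

counts = live_counts(segregate(records, "pathway_stage"))
```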


2014 ◽  
Vol 4 (2) ◽  
pp. 35-45
Author(s):  
Margarita Jaitner

The increased adoption of social media has presented security and law enforcement authorities with significant new challenges. For example, the Swedish Security Service (SÄPO) asserts that a large proportion of radicalization takes place in open fora online. Still, approaches to containing social-media-driven challenges to security, particularly in democratic societies, remain little explored. Nonetheless, this type of knowledge may become relevant in European countries in the near future: among other factors, the challenging economic situation has resulted in increased public discontent, leading to the emergence or manifestation of groups that seek to challenge existing policies by almost any means. The use of social media multiplies the number of vectors that need law enforcement attention. First, a high level of social media adoption allows groups to reach and attract a wider audience. Unlike in the past, many groups today consist of a large but very loosely connected network. This lack of cohesion can make it challenging for authorities to identify emerging key actors and to assess threat levels. Second, a high level of mobile web penetration has allowed groups to organize ad hoc, amend plans, and redirect physical activities. Third, social media as a tool is not exclusive to potential perpetrators of unlawful action but is equally available to law enforcement authorities. Yet efficient utilization of social media requires a deep understanding of its nature and a well-crafted, comprehensive approach. Acknowledging the broad functionality of social media, as well as its current status in society, this article describes a model process for security-authority and law enforcement work with social media in general, and for security services' work in particular. The process is cyclic and largely modular. It provides a set of goals and tasks for each stage of a potential event, rather than fixed activities.
This allows authorities to adapt the process to individual legal frameworks and organizational setups. The approach behind the process is holistic: social media is regarded as both a source and a destination of information. Ultimately, the process aims to mitigate the risk of virtual and physical violence efficiently and effectively.

