Bewertung amorpher Kieselsäuren an Arbeitsplätzen – Vergleich der Analysenverfahren in Deutschland und den USA/Evaluation of amorphous silicas at workplaces – comparison of the analysis methods in Germany and the USA

Gefahrstoffe ◽  
2021 ◽  
Vol 81 (03-04) ◽  
pp. 109-115
Author(s):  
Markus Mattenklott ◽  
Sandra Boos

The term amorphous silicas covers a large number of substances that consist essentially of SiO2 with varying proportions of H2O and, as a rule, only very small proportions of other elements. They include, for example, colloidal silicas (pyrogenic, gel and precipitated silicas), fused silica, Kieselgut, calcined and uncalcined diatomaceous earths, and silica fume. Depending on the health hazard, fused silica, Kieselgut, silica fume and calcined diatomaceous earths are evaluated against the limit value of 0.3 mg/m³ in the respirable dust fraction (A dust), and all other amorphous silicas against the limit value of 4 mg/m³ in the inhalable dust fraction (E dust). In addition to the analytical method for the direct determination of amorphous silicas that has been established in Germany for decades, an indirect method is used in the USA in which the amorphous silica is first converted into cristobalite by ignition (NIOSH Manual of Analytical Methods, NMAM 7501). Recent test series have shown that, depending on the type of amorphous silica and the ignition conditions, different proportions of cristobalite are formed. In general, the indirect method yields markedly low results even for high-purity substances. These shortfalls become even more pronounced when amorphous silicas occur in mixed dusts, which is the rule. The use of the American analytical method can therefore not be recommended. Since measurements in the USA are additionally performed with reference to the "total dust" fraction, the sampling likewise offers no possibility of comparing data collectives of the two countries.
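To make the German evaluation scheme described above concrete, the following minimal sketch (hypothetical substances, values and function names, not part of the study) assigns each amorphous silica to the dust fraction and limit value quoted above and reports the resulting exposure index.

```python
# Illustrative sketch (not from the paper): evaluating measured workplace
# concentrations of amorphous silicas against the two German limit values
# quoted above. Substance names and measured values are hypothetical.

LIMITS = {
    # respirable fraction (A dust), 0.3 mg/m^3
    "fused silica": ("respirable", 0.3),
    "silica fume": ("respirable", 0.3),
    "calcined kieselguhr": ("respirable", 0.3),
    # all other amorphous silicas: inhalable fraction (E dust), 4 mg/m^3
    "precipitated silica": ("inhalable", 4.0),
    "pyrogenic silica": ("inhalable", 4.0),
}

def exposure_index(substance: str, concentration_mg_m3: float) -> float:
    """Return the ratio of the measured concentration to the applicable limit.

    Values above 1.0 indicate that the limit value is exceeded.
    """
    fraction, limit = LIMITS[substance]
    return concentration_mg_m3 / limit

if __name__ == "__main__":
    # Hypothetical shift-average measurements in mg/m^3
    measurements = {"silica fume": 0.12, "precipitated silica": 2.5}
    for substance, value in measurements.items():
        print(f"{substance}: index = {exposure_index(substance, value):.2f}")
```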

Author(s):  
Richard Mattessich

It was with particular pleasure that, several years ago, I accepted the invitation of Chuo University to write a professional, biographical essay about my own experience with accounting. My relation with this university is a long-standing one. Shortly after two of my books, Accounting and Analytical Methods and Simulation of the Firm Through a Budget Computer Program, were published in the USA in 1964, Professor Kenji Aizaki (then at Chuo University) and his former student, Professor Fujio Harada, and later other scholars from Chuo University, began actively promoting my ideas in Japan. And after a two-volume Japanese translation of the first of these books was published in 1972 and 1975 (through the mediation of Professor Shinzaburo Koshimura, then President of Yokohama National University), my research found fertile ground in Japan through the continuing efforts of three generations of accounting academics from Chuo University. I suppose it is thanks to these endeavours that my efforts became so well known in Japan, and that during some three decades many Japanese accounting professors contacted me either personally or by correspondence. Then, from 1988 to 1990, Prof. Yoshiaki Koguchi, again from Chuo University, came as a visiting scholar to the University of British Columbia, audited some of my classes, and became a good friend and collaborator, which further strengthened my ties to this university.


2020 ◽  
Vol 11 (1) ◽  
pp. 24
Author(s):  
Samir Rachid Zaim ◽  
Colleen Kenost ◽  
Hao Helen Zhang ◽  
Yves A. Lussier

Background: Developing patient-centric baseline standards that enable the detection of clinically significant outlier gene products on a genome scale remains an unaddressed challenge required for advancing personalized medicine beyond the small pools of subjects implied by “precision medicine”. This manuscript proposes a novel approach to reference standard development for evaluating the accuracy of single-subject analyses of transcriptomes and offers extensions into proteomes and metabolomes. In evaluation frameworks for which the distributional assumptions of statistical testing imperfectly model the genome dynamics of gene products, artefacts and biases are confounded with authentic signals. Model confirmation biases escalate when studies use the same analytical methods in the discovery sets and the reference standards; in such studies, replicated biases are confounded with measures of accuracy. We hypothesized that developing method-agnostic reference standards would reduce such replication biases. We propose to evaluate discovery methods with a reference standard derived from a consensus of analytical methods distinct from the discovery one, in order to minimize statistical artefact biases. Our methods involve thresholding of effect size and expression level to filter results and improve consensus between analytical methods. We developed and released an R package, “referenceNof1”, to facilitate the construction of robust reference standards. Results: Since RNA-Seq data analysis methods range from those relying on binomial and negative binomial assumptions to non-parametric analyses, the differences between them create statistical noise and make the reference standards method dependent. In our experimental design, the accuracy of 30 distinct combinations of fold changes (FC) and expression counts (hereinafter “expression”) was determined for five types of RNA analyses in two distinct datasets: breast cancer cell lines and a yeast study with isogenic biological replicates in two experimental conditions. Furthermore, the reference standard (RS) comprised all RNA analytical methods with the exception of the method being tested for accuracy. To mitigate biases towards a specific analytical method, the pairwise Jaccard concordance index between the observed results of distinct analytical methods was calculated and used for optimization. Optimization through thresholding of effect size and expression level reduced the greatest discordances between the methods’ results and led to a 65% increase in concordance. Conclusions: We have demonstrated that comparing the accuracies of different single-subject analysis methods for clinical optimization in transcriptomics requires a new evaluation framework. Reliable and robust reference standards, independent of the evaluated method, can be obtained under a limited number of parameter combinations: fold change (FC) range thresholds, expression-level cutoffs, and exclusion of the tested method from the RS development process. When applying anticonservative reference standard frameworks (e.g., using the same method for RS development and prediction), most of the concordant signal between the prediction and the Gold Standard (GS) cannot be confirmed by other methods, which we interpret as biased results. Statistical tests to determine DEGs from a single-subject study generate many biased results that require subsequent filtering to increase reliability.
Conventional single-subject studies pertain to one or a few measures in a single patient over time and require a substantial extension of their conceptual framework to address the numerous measures in genome-wide analyses of gene products. The proposed referenceNof1 framework addresses some of the inherent challenges of improving transcriptome-scale single-subject analyses by providing a robust approach to constructing reference standards.
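The core idea of a method-agnostic reference standard can be illustrated with a short sketch. This is not the referenceNof1 implementation; method names, gene symbols and the voting rule are assumptions chosen only to show how a consensus DEG set is built without the method under evaluation and how pairwise Jaccard concordance is scored.

```python
# Illustrative sketch (not the referenceNof1 implementation): a reference
# standard built as the consensus of DEG calls from several analytical
# methods, excluding the method being evaluated, plus pairwise Jaccard
# concordance between the remaining methods.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard concordance between two sets of gene identifiers."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def reference_standard(calls_by_method: dict, tested_method: str,
                       min_votes: int = 2) -> set:
    """Consensus DEG set from all methods except the one being evaluated."""
    others = {m: g for m, g in calls_by_method.items() if m != tested_method}
    votes = {}
    for genes in others.values():
        for gene in genes:
            votes[gene] = votes.get(gene, 0) + 1
    return {gene for gene, n in votes.items() if n >= min_votes}

if __name__ == "__main__":
    # Hypothetical DEG calls (gene symbols) from four analytical methods
    calls = {
        "edgeR": {"BRCA1", "TP53", "MYC"},
        "DESeq2": {"BRCA1", "TP53", "EGFR"},
        "limma": {"BRCA1", "MYC", "EGFR"},
        "NOISeq": {"BRCA1", "TP53"},
    }
    rs = reference_standard(calls, tested_method="edgeR")
    print("reference standard:", rs)
    print("accuracy proxy (Jaccard vs RS):", round(jaccard(calls["edgeR"], rs), 2))
    # Pairwise concordance between the methods that form the reference standard
    for m1, m2 in combinations([m for m in calls if m != "edgeR"], 2):
        print(m1, m2, round(jaccard(calls[m1], calls[m2]), 2))
```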


2020 ◽  
pp. 722-727
Author(s):  
Stephen N. Walford ◽  
Camille Roussel

The International Commission for Uniform Methods of Sugar Analysis Ltd (ICUMSA) is the only international organisation involved with the development and testing of analytical methods for use within the sugar industry. Analysis methods undergo a rigorous approval procedure, progressing from Tentative to Official or Official (Reference) method status before being published in the Methods Book. As an example of these processes, the comparison of two alternative computations (Berding and Pollock; Hamna) used in a proposed hydraulic-press method for cane evaluation is described. The press method was compared to the Official Method for cane analysis (GS5/7-1 (2011)), which is based on wet disintegration of cane, by comparing the results obtained from 79 samples split into three subsamples analysed by the three methods. Correlations of greater than 0.9 were recorded for the press method’s sugar and fibre results against the existing method. A recommendation was made to ICUMSA, based on the study, that a draft method should be written for cane evaluation using the press method and the Berding and Pollock computations.
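The comparison reported above rests on correlating paired results from the two analytical routes. The following sketch uses hypothetical values (not the study's data) to show how such a correlation between press-method and wet-disintegration results might be computed.

```python
# Illustrative sketch (hypothetical data, not the ICUMSA study values):
# Pearson correlation between sugar results from a hydraulic-press method
# and the official wet-disintegration method, mirroring the >0.9
# correlations reported above.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

if __name__ == "__main__":
    # Hypothetical paired subsample results (sugar % cane)
    official = [13.1, 14.2, 12.8, 15.0, 13.7, 14.9]
    press = [12.9, 14.4, 12.6, 15.2, 13.5, 15.1]
    print(f"r = {pearson(official, press):.3f}")
```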


2021 ◽  
Vol 108 ◽  
pp. 04003
Author(s):  
Sergey Aleksandrovich Nasonov ◽  
Yuliya Vladimirovna Strelkova

A prerequisite for this research is the continuing relevance of the problem of collecting and checking personal data of jury members (or jury member candidates) in order to verify the information they report at the stage of jury formation. The purpose of this research is to define the link between checking the personal data of jury members and the legal nature of jury proceedings, and to find a balance between the need to restrict access to such data and the need to ensure the lawful composition of the court. To address these issues, the paper examines doctrinal approaches in Russian and foreign scientific literature and uses analytical, legal-technical and comparative-legal methods. The research has found that collecting and verifying the personal data of jury members is typical of both Russian and foreign models of jury proceedings. However, a balance between the need for such checks and the independence of jury members is not always observed in Russian court practice, which is caused by gaps in legal regulation. It therefore seems worthwhile to study how foreign models regulate this practice (the anonymous jury and the conditions for disclosing jury members’ data; restrictions on the right to collect data on jury members in the USA after adjuration). These research results are new, since they have not been described in Russian legal periodicals or monographic literature. A further novel result is the description of the problems of prosecution authorities obtaining information about jury members, which is relevant for Russia and has not been studied before.


2021 ◽  
Vol 13 (2) ◽  
pp. 027-052
Author(s):  
Evgeny V. Balatsky ◽  
◽  
Natalia A. Ekimova ◽  
Olga V. Tretyackova ◽  
◽  
...  

The paper presents a review of current methods of evaluating, ranking and rating academic economics journals in Russia and other states – the USA, the EU countries and China. To this end, the authors introduce a common typology of analytical methods that includes bibliometric, peer review, network (invariant), hybrid and consolidated algorithms for evaluating the quality of journals. The article shows that the differentiation of periodicals can be performed using quantitative and qualitative approaches. The authors explore the use of so-called institutional filters, which are rules for regulating the academic sphere via specialized tools for assessing academic economics journals. The findings reveal that the trend established at the beginning of the 21st century towards applying the formal metrics of international databases in science policy hides many threats. Among them are losses of status by Russian academic economics journals that maintain high academic standards but are not covered by international databases, as well as various consequences of manipulative practices and the setting of obviously unrealistic targets. The authors outline the conditions for the efficient use of formal academic economics journal metrics as institutional filters. The paper argues that current analytical methods for evaluating the quality of periodicals make it possible to carry out a reasonable and unbiased ranking for the research needs of Russian economists.


Author(s):  
Vladimíra Osadská

Abstract In this paper, we review basic stochastic methods that can be used to extend state-of-the-art deterministic analytical methods for risk analysis. We conclude that the standard deterministic analytical methods depend heavily on the practical experience and knowledge of the evaluator, and that stochastic methods should therefore be introduced. New risk analysis methods should take the uncertainties in input values into account. We show how large the impact on the results of the analysis can be by solving a practical FMECA example with uncertainties modelled using Monte Carlo sampling.
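A minimal sketch of the idea follows; it is an assumption-laden illustration, not the paper's FMECA model. It propagates uncertainty in the usual risk priority number (RPN = severity x occurrence x detection) by Monte Carlo sampling over expert-given ranges instead of single deterministic ratings.

```python
# Illustrative sketch (assumptions, not the paper's model): Monte Carlo
# propagation of rating uncertainty through an FMECA risk priority number.
import random

def sample_rpn(severity, occurrence, detection, n=10_000, seed=0):
    """Sample RPN values when each rating is given as a (low, high) range."""
    rng = random.Random(seed)
    rpns = []
    for _ in range(n):
        s = rng.uniform(*severity)
        o = rng.uniform(*occurrence)
        d = rng.uniform(*detection)
        rpns.append(s * o * d)
    return rpns

if __name__ == "__main__":
    # Hypothetical expert ranges on the usual 1-10 FMECA scales
    rpns = sample_rpn(severity=(6, 8), occurrence=(3, 5), detection=(4, 7))
    rpns.sort()
    mean = sum(rpns) / len(rpns)
    p95 = rpns[int(0.95 * len(rpns))]
    print(f"mean RPN = {mean:.0f}, 95th percentile = {p95:.0f}")
```

The spread between the mean and an upper percentile is what a purely deterministic RPN hides, which is the motivation for the stochastic extension reviewed in the paper.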


Author(s):  
Samir Rachid Zaim ◽  
Colleen Kenost ◽  
Hao Helen Zhang ◽  
Yves A. Lussier

Background: Developing patient-centric baseline standards that enable the detection of clinically significant outlier gene products on a genome scale remains an unaddressed challenge required for advancing personalized medicine beyond the small pools of subjects implied by “precision medicine”. This manuscript proposes a novel approach to reference standard development for evaluating the accuracy of single-subject analyses of metabolomes, proteomes, or transcriptomes. Since the distributional assumptions of statistical testing may inadequately model the genome dynamics of gene products, the so-called significant results of previous studies may be artefactually conflated with real signals. Model confirmation biases escalate when studies use the same analytical methods in the discovery sets and the reference standards, as corroboration of results then evaluates reproducibility confounded with replicated biases rather than measuring accuracy. We hypothesized that developing method-agnostic reference standards, using effect-size and expression-level filtering of results obtained from multiple discovery methods distinct from the one evaluated, would maximize the evaluation of clinical-transcriptomic signals and minimize statistical artefactual biases. We developed and released an R package, “referenceNof1”, to facilitate the construction of robust reference standards. Results: Since RNA-Seq data analysis methods range from those relying on binomial and negative binomial assumptions to non-parametric analyses, the differences between them create statistical noise and make the reference standards method dependent. In our experimental design, the accuracy of 30 distinct combinations of fold changes (FC) and expression levels (EL) was determined for five types of RNA analyses in two distinct datasets: breast cancer cell lines and a yeast study with isogenic biological replicates in two experimental conditions. In addition, the reference standard (RS) comprised all RNA analytical methods with the exception of the method being tested for accuracy. To mitigate biased optimization of the RS parameters towards a specific analytical method, the similarity between the observed results of distinct analytical methods was calculated across all methods (Jaccard concordance index). The greatest differences were observed across diametric extremes. For example, filtering out differentially expressed genes (DEGs) with a fold change < 1.2 leads to a 50% increase in concordance between techniques when compared to results with FC > 1.2. Combining this FC cutoff with genes with mean expression > 30 counts leads to a 65% increase in concordance in comparison to genes with expression levels < 30 counts and FC < 1.2. Conclusions: We have demonstrated that comparing the accuracies of different single-subject analysis methods for clinical optimization requires a new evaluation framework. Reliable and robust reference standards, independent of the evaluated method, can be obtained under a limited number of parameter combinations: fold change (FC) range thresholds, expression-level cutoffs, and exclusion of the tested method from the RS development process. When applying anticonservative reference standard frameworks (e.g., using the same method for RS development and for prediction), a majority of the concordant signal between the prediction and the Gold Standard (GS) cannot be confirmed by other methods, which we interpret as biased results.
Statistical tests to determine DEGs from a single-subject study generate many biased results that require subsequent filtering to increase their reliability. Conventional single-subject studies pertain to one or a few measures in one patient over time [1] and need a substantial conceptual framework extension in order to address the tens of thousands of measures in genome-wide analyses of gene products. The proposed referenceNof1 framework addresses some of the inherent challenges in improving transcriptome-scale single-subject analyses by providing a robust approach to constructing reference standards. Github: https://github.com/SamirRachidZaim/referenceNof1
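The effect-size and expression-level filtering step described in this abstract can be sketched as follows. The cutoffs mirror the FC 1.2 and 30-count examples quoted above, but the gene names, values and function names are hypothetical and the code is not taken from the referenceNof1 package.

```python
# Illustrative sketch (hypothetical values, not referenceNof1 code): keep
# only genes whose fold change lies outside [1/FC_CUTOFF, FC_CUTOFF] and
# whose mean expression meets the count threshold, before computing
# concordance between analytical methods.
FC_CUTOFF = 1.2        # fold change threshold used as an example above
MIN_MEAN_COUNTS = 30   # expression-level cutoff used as an example above

def passes_filters(fold_change: float, mean_counts: float) -> bool:
    """Keep a gene only if both the effect-size and expression filters are met."""
    effect_ok = fold_change >= FC_CUTOFF or fold_change <= 1.0 / FC_CUTOFF
    return effect_ok and mean_counts >= MIN_MEAN_COUNTS

if __name__ == "__main__":
    # gene -> (fold change, mean expression counts), hypothetical numbers
    results = {
        "BRCA1": (1.8, 120.0),   # up-regulated, well expressed: kept
        "MYC": (0.6, 200.0),     # down-regulated, well expressed: kept
        "TP53": (1.1, 300.0),    # fails the fold-change filter
        "EGFR": (2.4, 12.0),     # fails the expression filter
    }
    kept = {g for g, (fc, ec) in results.items() if passes_filters(fc, ec)}
    print("genes retained for concordance analysis:", kept)
```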


Author(s):  
Kanhaiya L. Bardia ◽  
Donald G. LaBounty ◽  
Michael M. Basic ◽  
Timothy D. Breig

The 2007 ASME Section VIII, Division 2, Part 5, Design by Analysis, provides requirements for designing vessels and components using analytical methods. The authors have spent a considerable amount of time studying and applying the requirements of the design-by-analysis methods in Part 5. As a result of this work, the authors have concluded that the main factor for weight savings is the stress criterion used to calculate the required thickness in the design-by-analysis method. An example is provided to demonstrate the methods presented.
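The link between the stress criterion and weight savings can be seen even in a deliberately simplified model. The sketch below is not the ASME VIII-2 Part 5 procedure; it only uses a thin-cylinder hoop-stress estimate with hypothetical numbers to show how a higher allowable stress reduces the required thickness and therefore the shell weight.

```python
# Simplified illustration (NOT the ASME VIII-2 Part 5 procedure): how the
# allowable-stress criterion drives required wall thickness, and therefore
# vessel weight, for a thin-walled cylindrical shell under internal pressure.
# All numbers are hypothetical.
from math import pi

def required_thickness(pressure, radius, allowable_stress, joint_eff=1.0):
    """Thin-cylinder hoop-stress estimate: t = P * R / (S * E)."""
    return pressure * radius / (allowable_stress * joint_eff)

def shell_weight(radius, thickness, length, density=7850.0):
    """Approximate mass (kg) of the cylindrical shell wall in steel."""
    return 2.0 * pi * radius * thickness * length * density

if __name__ == "__main__":
    P, R, L = 2.0e6, 1.0, 6.0            # Pa, m, m (hypothetical)
    for S in (138e6, 172e6):             # two candidate allowable stresses, Pa
        t = required_thickness(P, R, S)
        print(f"S = {S/1e6:.0f} MPa -> t = {t*1000:.1f} mm, "
              f"shell mass ~ {shell_weight(R, t, L):.0f} kg")
```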


Author(s):  
J.R. McIntosh ◽  
D.L. Stemple ◽  
William Bishop ◽  
G.W. Hannaway

EM specimens often contain three-dimensional information that is lost during micrography on a single photographic film. Two images of one specimen at appropriate orientations give a stereo view, but complex structures composed of multiple objects of graded density that superimpose in each projection are often difficult to decipher in stereo. Several analytical methods for 3-D reconstruction from multiple images of a serially tilted specimen are available, but they are all time-consuming and computationally intensive.
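To give a sense of the class of analytical methods mentioned, the following sketch implements unweighted back-projection of a simulated tilt series into a single 2-D slice. It is an assumption-laden illustration, not the authors' method, and uses NumPy/SciPy with a toy phantom and a typical EM tilt range.

```python
# Minimal sketch (assumptions, not the authors' method): unweighted
# back-projection of a simulated tilt series into a 2-D slice, the simplest
# of the analytical 3-D reconstruction approaches mentioned above.
import numpy as np
from scipy.ndimage import rotate

def project(image, angle_deg):
    """Simulate one tilt projection: rotate the slice and sum along rows."""
    return rotate(image, angle_deg, reshape=False, order=1).sum(axis=0)

def backproject(projections, angles_deg, size):
    """Unweighted back-projection of 1-D projections into a (size x size) slice."""
    recon = np.zeros((size, size))
    for proj, angle in zip(projections, angles_deg):
        smear = np.tile(proj, (size, 1))               # smear along the beam
        recon += rotate(smear, -angle, reshape=False, order=1)
    return recon / len(projections)

if __name__ == "__main__":
    size = 64
    phantom = np.zeros((size, size))
    phantom[24:40, 28:36] = 1.0                        # toy rectangular object
    angles = np.arange(-60, 61, 3)                     # typical EM tilt range, deg
    tilt_series = [project(phantom, a) for a in angles]
    recon = backproject(tilt_series, angles, size)
    print("peak of reconstruction:", round(float(recon.max()), 1))
```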

