Journal of Research of the National Institute of Standards and Technology
Latest Publications

Published by the National Institute of Standards and Technology
ISSN: 2165-7254, 1044-677X
Total documents: 1376 (114 in the past five years); h-index: 61 (4 in the past five years)

Author(s): Stephen M. Zimmerman, Carl G. Simon Jr., Greta Babakhanova

The AbsorbanceQ app converts brightfield microscope images into absorbance images that can be analyzed and compared across different operators, microscopes, and time. Because absorbance-based measurements are comparable across these parameters, they are useful when the aim is to manufacture biotherapeutics with consistent quality. AbsorbanceQ will be of value to those who want to capture quantitative absorbance images of cells. The app has two modes: a single-image processing mode and a batch processing mode for multiple images. Instructions for using the app are given on the ‘App Information’ tab when the app is opened. The input and output images for the app have been defined, and synthetic images were used to validate that the output images are correct. This article describes how to use the app, its software specifications, how the app works, instructive advice on using the tools, and the methods used to generate the software. In addition, links are provided to a website where the app and test images are deployed.
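As a sketch of the underlying conversion, absorbance imaging typically applies the Beer–Lambert relation A = −log10(I/I0) pixel by pixel against a blank-field reference image I0. The snippet below is a minimal, assumption-laden illustration of that relation, not the AbsorbanceQ implementation itself:

```python
import math

def to_absorbance(sample, blank, eps=1e-12):
    """Convert brightfield intensities to absorbance, A = -log10(I / I0).

    `sample` and `blank` are same-shaped nested lists of pixel
    intensities; `blank` is a blank-field (no specimen) reference image.
    `eps` guards against division by zero and log of zero.
    """
    return [[-math.log10(max(i, eps) / max(i0, eps))
             for i, i0 in zip(row_s, row_b)]
            for row_s, row_b in zip(sample, blank)]

# A pixel transmitting 10 % of the blank intensity has absorbance 1.0,
# one transmitting 1 % has absorbance 2.0, and so on.
blank = [[200.0, 200.0], [200.0, 200.0]]
sample = [[20.0, 200.0], [2.0, 100.0]]
A = to_absorbance(sample, blank)
```

Because the conversion divides out the illumination recorded in the blank image, the resulting values are, in principle, comparable across instruments and sessions, which is the property the abstract emphasizes.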


Author(s): Jon Geist, Michael Gaitan

We simulated the effects of gimbal-alignment errors and rotational step-size errors on measurements of the sensitivity matrix and intrinsic properties of a triaxial accelerometer. We restricted the study to measurements carried out on a two-axis calibration system using a previously described measurement and analysis protocol. As well as imperfections in the calibration system, we simulated imperfect orthogonality of the accelerometer axes and non-identical sensitivities of the individual accelerometers in an otherwise perfect triaxial accelerometer, but we left characterization of other accelerometer imperfections, such as non-linearity, for future study. Within this framework, sensitivity-matrix errors are caused by imperfections in the construction and installation of the accelerometer calibration system, but not by the accelerometer imperfections included in the simulations. We use the results of this study to assign type B uncertainties, due to imperfections in the measurement system, to the components of the sensitivity matrix and related intrinsic properties. For calibrations using a reasonably well manufactured and installed multi-axis rotation stage such as that studied in this paper, we estimated upper bounds to the standard uncertainties, relative to a sensitivity-matrix element of 1, of the order of 1 × 10⁻⁵ for the intrinsic sensitivities, 2 × 10⁻⁵ for the diagonal elements of the sensitivity matrix, 2 × 10⁻⁴ for the off-diagonal elements of the sensitivity matrix, and 5 × 10⁻⁵ for the zero-acceleration offsets, and 5 × 10⁻³ degrees for the intrinsic angles.
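To make the measurement model concrete, a triaxial accelerometer is commonly described by r = S a + b, where S is the 3 × 3 sensitivity matrix and b the vector of zero-acceleration offsets. The sketch below shows, under idealized noise-free assumptions and with made-up matrix values, how a classic six-position calibration recovers S and b by orienting gravity along each axis in turn; it illustrates the model only and is not the paper's two-axis gimbal protocol:

```python
G = 9.80665  # standard gravity, m/s^2

# Illustrative 3x3 sensitivity matrix: near-identity diagonal with small
# off-diagonal terms standing in for slight axis non-orthogonality, plus
# per-axis zero-acceleration offsets (values are invented for the sketch).
S_true = [[1.002, 0.003, -0.001],
          [0.002, 0.998, 0.004],
          [-0.003, 0.001, 1.001]]
b_true = [0.05, -0.02, 0.01]

def reading(a):
    """Noise-free triaxial output r = S a + b for applied acceleration a."""
    return [sum(S_true[i][j] * a[j] for j in range(3)) + b_true[i]
            for i in range(3)]

def six_position_calibration():
    """Recover S and b from readings with gravity along +/-x, +/-y, +/-z.

    Differencing the +g and -g readings isolates column j of S; their
    average isolates the offsets (averaged over the three axis pairs).
    """
    S = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for j in range(3):
        plus = [G if k == j else 0.0 for k in range(3)]
        minus = [-g for g in plus]
        rp, rm = reading(plus), reading(minus)
        for i in range(3):
            S[i][j] = (rp[i] - rm[i]) / (2 * G)
            b[i] += (rp[i] + rm[i]) / 6  # (rp+rm)/2 = b; average of 3 pairs
    return S, b

S_est, b_est = six_position_calibration()
```

In the paper's setting the interesting question is how gimbal misalignment and step-size errors perturb the recovered S_est and b_est away from the true values; in this noise-free sketch the recovery is exact.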


Author(s): Nathan A. Mahynski, Vincent K. Shen, Jared M. Ragland, Stacy S. Schuur, Rebecca Pugh

The multi-entity, long-term Seabird Tissue Archival and Monitoring Project (STAMP) has collected eggs from various avian species throughout the North Pacific Ocean for over 20 years to create a geospatial and temporal record of environmental conditions. Over 2,500 samples are currently archived at the NIST Biorepository at Hollings Marine Laboratory in Charleston, South Carolina. Longitudinal monitoring efforts of this nature provide invaluable data for assessment of both wildlife and human exposures, as these species often consume prey (e.g., fish) similar to, and from sources (e.g., oceanic) comparable to, those of nearby human populations. In some areas, seabird eggs also comprise a significant part of subsistence diets, providing nutrition for indigenous peoples. Chemometric profiles and related health implications are known to differ across species. Eggs, however, can be difficult to assign to a species unless the bird is observed on the nest from which the sample was collected, owing to similar appearance within a genus and sympatric nesting behavior. This represents a large point of uncertainty for wildlife managers and exposure researchers alike.


Author(s): Niksa Blonder, Frank Delaglio

The Nuclear Magnetic Resonance Spectral Measurement Database (NMR-SMDB) was developed for the purpose of organizing and searching NMR spectral data of protein therapeutics, linking spectra to corresponding sample information and enabling quick access to full datasets and entire studies. In addition to supporting internal research at the National Institute of Standards and Technology (NIST), the system could facilitate data access to stakeholders outside of NIST, and future versions of the database software itself could be installed by others for their own data storage and retrieval.


Author(s): Daniel W. Siderius

Sorption isotherms collected from tables in the seminal dissertation, “The Thermodynamics and Hysteresis of Adsorption” by A. J. Brown, have been digitized and made publicly available, along with supporting software scripts that facilitate use of the data. The isotherms include laboratory measurements of xenon, krypton, and carbon dioxide adsorption (and, when possible, desorption) on a single sample of Vycor glass at various temperatures, including subcritical conditions for xenon and krypton. The highlight of this dataset is the collection of “scanning” isotherms for xenon on Vycor at 131 K. The scanning isotherms examine numerous trajectories through the adsorption-desorption hysteresis region, such as primary adsorption and desorption scanning isotherms that terminate at the hysteresis boundary, secondary scanning isotherms made by selective reversals that return to the boundary, and closed scanning loops. This dataset was originally used to test the independent domain theory of adsorption and continues to support successor theories of adsorption/desorption scanning hysteresis, including more recent theories based on percolation models. Through digital preservation and release of the tables from Brown’s dissertation, these data are now more easily accessible and can continue to find use in developing models of adsorption for fundamental and practical applications.


Author(s): Jeffrey T. Fong, N. Alan Heckert, James J. Filliben, Pedro V. Marcal, Stephen W. Freiman

Three types of uncertainty exist in the estimation of the minimum fracture strength of a full-scale component or structure. The first, to be called the “model selection uncertainty,” is in selecting a statistical distribution that best fits the laboratory test data. The second, to be called the “laboratory-scale strength uncertainty,” is in estimating model parameters of a specific distribution from which the minimum failure strength of a material at a certain confidence level is estimated using the laboratory test data. To extrapolate the laboratory-scale strength prediction to that of a full-scale component, a third uncertainty exists that can be called the “full-scale strength uncertainty.” In this paper, we develop a three-step approach to estimating the minimum strength of a full-scale component using two metrics: one metric is based on six goodness-of-fit and parameter-estimation-method criteria, and the second metric is based on the uncertainty quantification of the so-called A-basis design allowable (99 % coverage at 95 % level of confidence) of the full-scale component. The three steps of our approach are: (1) Find the “best” model for the sample data from a list of five candidates, namely, normal, two-parameter Weibull, three-parameter Weibull, two-parameter lognormal, and three-parameter lognormal. (2) For each model, estimate (2a) the parameters of that model with uncertainty using the sample data, and (2b) the minimum strength at the laboratory scale at 95 % level of confidence. (3) Introduce the concept of “coverage” and estimate the full-scale allowable minimum strength of the component at 95 % level of confidence for two types of coverage commonly used in the aerospace industry, namely, 99 % (A-basis for critical parts) and 90 % (B-basis for less critical parts).
This uncertainty-based approach is novel in all three steps: in step 1, we use a composite goodness-of-fit metric to rank and select the “best” distribution; in step 2, we introduce uncertainty quantification in estimating the parameters of each distribution; and in step 3, we introduce an uncertainty metric based on the estimates of the upper and lower tolerance limits of the so-called A-basis design allowable minimum strength. To illustrate the applicability of this uncertainty-based approach to a diverse group of data, we present results of our analysis for six sets of laboratory failure strength data from four engineering materials. A discussion of the significance and limitations of this approach and some concluding remarks are included.
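As an illustration of the A-basis idea for the normal model only, the allowable can be sketched as a one-sided tolerance bound x̄ − k·s, with the factor k chosen so that 99 % of the population lies above the bound with 95 % confidence. The snippet below uses the closed-form Natrella approximation for k rather than the exact noncentral-t value, and the strength data are invented for illustration; it is a sketch of the concept, not the paper's composite-metric method:

```python
from statistics import NormalDist, mean, stdev

def tolerance_factor(n, coverage=0.99, confidence=0.95):
    """One-sided normal tolerance factor k (Natrella approximation).

    Approximates the exact noncentral-t factor; adequate for a sketch.
    """
    zp = NormalDist().inv_cdf(coverage)    # coverage quantile
    zg = NormalDist().inv_cdf(confidence)  # confidence quantile
    a = 1 - zg ** 2 / (2 * (n - 1))
    b = zp ** 2 - zg ** 2 / n
    return (zp + (zp ** 2 - a * b) ** 0.5) / a

def a_basis(data):
    """A-basis allowable: 99 % coverage at 95 % confidence, normal model."""
    return mean(data) - tolerance_factor(len(data)) * stdev(data)

# Invented laboratory failure strengths (MPa) for illustration only.
strengths = [432.0, 447.0, 451.0, 428.0, 440.0, 455.0, 436.0, 444.0]
allowable = a_basis(strengths)
```

The factor k shrinks as the sample size grows, which is why small laboratory data sets produce conservative (low) allowables; the paper's contribution is to quantify the additional uncertainty in such bounds across candidate distributions and scales.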


Author(s): Ulrich K. Deiters, Ian H. Bell

The multicomplex finite-step method for numerical differentiation is an extension of the popular Squire–Trapp method, which uses complex arithmetic to compute first-order derivatives with almost machine precision. In contrast, the multicomplex method can be applied to higher-order derivatives. Furthermore, it can be applied to functions of more than one variable to obtain mixed derivatives, and several derivatives can be computed at the same time. This work demonstrates numerical differentiation with multicomplex variables for some thermodynamic problems. The method can be easily implemented in existing computer programs, applied to equations of state of arbitrary complexity, and achieves almost machine precision for the derivatives. Alternative methods based on complex integration are also discussed.
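The first-order Squire–Trapp step that the multicomplex method generalizes is easy to demonstrate: since f(x + ih) = f(x) + ih f′(x) + O(h²) involves no subtraction of nearly equal numbers, the imaginary part divided by h gives the derivative to machine precision even for an absurdly small step. A minimal sketch using ordinary complex arithmetic follows (higher-order and mixed derivatives require the multicomplex extension described in the paper):

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-200):
    """Squire-Trapp complex-step first derivative: f'(x) ~ Im f(x + ih) / h.

    Because no subtractive cancellation occurs, h can be made far smaller
    than any finite-difference step and the result stays at machine
    precision. Requires f to be implemented with complex-capable operations.
    """
    return f(x + 1j * h).imag / h

# Toy smooth function standing in for an equation-of-state term.
f = lambda x: cmath.exp(x) * cmath.sin(x)
x = 0.7
exact = math.exp(x) * (math.sin(x) + math.cos(x))  # analytic derivative
approx = complex_step_derivative(f, x)
```

A central finite difference with the same h would be pure rounding noise; the complex step works because the real and imaginary parts carry the function value and the derivative separately.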


Author(s): Yuqin Zong, Jeff Hulett, Naomasa Koide, Yoshiki Yamaji, C. Cameron Miller

Limited sources exist for the application of germicidal ultraviolet (GUV) radiation. Ultraviolet light-emitting diodes (UV-LEDs) have significantly improved in efficiency and are becoming another viable source for GUV. We have developed a mean differential continuous pulse (M-DCP) method for optical measurements of light-emitting diodes (LEDs) and laser diodes (LDs). The new M-DCP method improves measurement uncertainty by one order of magnitude compared to the unpublished differential continuous pulse (DCP) method, which was itself a significant improvement over the continuous pulse (CP) method commonly used in the LED industry. The new M-DCP method also makes it possible to measure UV-LEDs with high accuracy. Here, we present the DCP method, discuss its potential sources of systematic error, and present the M-DCP method along with its reduced systematic errors. This paper also presents the results of validation measurements of LEDs using the M-DCP method and common test instruments.


Author(s): Kevin J. Coakley

In experiments in a range of fields, including fast neutron spectroscopy and astroparticle physics, one can discriminate events of interest from background events based on the shapes of electronic pulses produced by energy deposits in a detector. Here, I focus on a well-known pulse shape discrimination method based on the ratio of the temporal integral of the pulse over an early interval, Xp, to the temporal integral over the entire pulse, Xt. For both event classes, and for both a Gaussian noise model and a Poisson noise model, I present analytic expressions for the conditional distribution of Xp given knowledge of the observed value of Xt and a scaled energy deposit corresponding to the product of the full energy deposit and a relative yield factor. I assume that the energy-dependent theoretical prompt fractions for both classes are known exactly. With a Bayesian approach that accounts for imperfect knowledge of the scaled energy deposit, I determine the posterior mean background acceptance probability given the target signal acceptance probability as a function of the observed value of Xt. My method enables one to determine receiver operating characteristic curves by numerical integration rather than by Monte Carlo simulation for these two noise models.
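As a toy illustration of the numerical-integration idea: under a simplified Gaussian noise model in which Xp given Xt is normal with mean f·Xt (f the class prompt fraction) and known standard deviation, the acceptance probability for a cut on Xp is a single Gaussian CDF evaluation, and an ROC curve follows by sweeping the cut, with no Monte Carlo needed. The numbers below are invented, and this sketch omits the paper's Bayesian treatment of the scaled energy deposit:

```python
from statistics import NormalDist

def roc_point(threshold, xt, f_sig, f_bkg, sigma):
    """Signal and background acceptance for the cut Xp <= threshold * Xt.

    Assumes Xp | Xt is normal with mean f * Xt and standard deviation
    sigma for each class, so each acceptance probability is one Gaussian
    CDF evaluation rather than a simulation.
    """
    cut = threshold * xt
    p_sig = NormalDist(f_sig * xt, sigma).cdf(cut)
    p_bkg = NormalDist(f_bkg * xt, sigma).cdf(cut)
    return p_sig, p_bkg

# Invented numbers: signal pulses are "slower" (lower prompt fraction)
# than background pulses, so a cut on the prompt ratio separates them.
xt, f_sig, f_bkg, sigma = 1000.0, 0.30, 0.55, 40.0
roc = [roc_point(0.30 + 0.01 * k, xt, f_sig, f_bkg, sigma)
       for k in range(26)]
```

Sweeping the threshold traces out the (signal acceptance, background acceptance) pairs of the ROC curve; a cut midway between the two class means accepts nearly all signal while rejecting nearly all background in this toy setting.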


Author(s): Dilip K. Banerjee

Structural design for fire is conceptually similar to structural design under ambient temperature conditions. Such design requires the establishment of clear objectives and the determination of the severity of the design fire. In the commonly used prescriptive design method for fire, fire resistance (expressed in hours) is the primary qualification metric; this is an artifact of the standard fire tests used to determine this quantity. When following a performance-based approach to structural design for fire, it is important to determine structural member temperatures accurately when the members are exposed to a real fire. To evaluate the fire resistance of structural members made of materials such as structural steel and concrete, both the temporal and spatial variation of temperatures must be accurately determined. The transient temperature profiles in structural members during exposure to a fire can be determined from a heat transfer analysis. Several models and approaches for analyzing heat transfer have been used to determine transient structural temperatures during a fire event, ranging from simple models to advanced three-dimensional heat transfer analyses employing finite element or finite difference techniques. This document provides a brief summary of some of the common simple and advanced approaches that have been used for conducting heat transfer analysis of both steel and concrete members exposed to fire.
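A representative simple model of the kind summarized here is the lumped-capacitance (uniform-temperature) approximation for a steel member exposed to the ISO 834 standard fire curve, stepped explicitly in time. The sketch below uses typical textbook property values for density, specific heat, convection coefficient, and emissivity; they are illustrative assumptions, not values endorsed by this document:

```python
import math

def gas_temp(t_min):
    """ISO 834 standard fire curve: gas temperature in deg C at t minutes."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

def steel_temperature(minutes, section_factor=200.0, dt=1.0):
    """Lumped-capacitance steel member temperature after `minutes` of fire.

    Explicit Euler stepping of  dT/dt = (Am/V) * h_net / (rho * c),
    where h_net combines convective and radiative heat flux from the
    fire gases. section_factor is Am/V in 1/m; dt is the step in seconds.
    """
    rho, c = 7850.0, 600.0                   # steel density, specific heat
    alpha_c, eps, sig = 25.0, 0.7, 5.67e-8   # convection coeff., emissivity,
                                             # Stefan-Boltzmann constant
    T = 20.0   # member starts at ambient temperature, deg C
    t = 0.0
    while t < minutes * 60.0:
        Tg = gas_temp(t / 60.0)
        h_net = (alpha_c * (Tg - T)
                 + eps * sig * ((Tg + 273.15) ** 4 - (T + 273.15) ** 4))
        T += section_factor * h_net / (rho * c) * dt
        t += dt
    return T

T30 = steel_temperature(30.0)  # member temperature after 30 min of fire
```

Because the member is assumed isothermal, this model captures only the temporal variation of temperature; resolving the spatial variation the text calls for is what motivates the finite element and finite difference approaches.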

