design matrix
Recently Published Documents

TOTAL DOCUMENTS: 255 (FIVE YEARS: 86)
H-INDEX: 14 (FIVE YEARS: 2)

Author(s):  
H.-W. Chen

Abstract. A new statistical model designed for regression analysis with a sparse design matrix is proposed. The model uses the positions of the few non-zero elements in the design matrix to decompose the regression model into sub-regression models. Statistical inferences are then made on the values of these non-zero elements to guide the synthesis of the sub-regression models. Through this scheme of regression decomposition and synthesis, information on the structure of the design matrix is incorporated into the regression analysis to provide more reliable estimation. The proposed model is applied to the spatial resolution enhancement problem for spatially oversampled images. To evaluate its performance systematically, the approach is applied to oversampled images reproduced via random field simulations. The results across the generated scenarios demonstrate the effectiveness and feasibility of the proposed approach in enhancing the spatial resolution of spatially oversampled images.
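The decomposition idea can be illustrated with a small sketch (our illustration, not the author's implementation): when the non-zero positions of a sparse design matrix split the rows and columns into disconnected groups, each group defines an independent sub-regression that can be solved on its own.

```python
import numpy as np
from scipy.sparse import coo_matrix, csr_matrix
from scipy.sparse.csgraph import connected_components

def decompose_and_solve(X, y):
    """Split a sparse least-squares problem into independent sub-regressions.

    Rows and columns of X that share no non-zero positions fall into separate
    connected components of a bipartite row-column graph; each component is an
    independent sub-regression that can be solved on its own.
    """
    n, p = X.shape
    rows, cols = csr_matrix(X).nonzero()
    # Bipartite graph: node i < n is row i, node n + j is column j.
    adj = coo_matrix((np.ones_like(rows), (rows, n + cols)), shape=(n + p, n + p))
    n_comp, labels = connected_components(adj + adj.T, directed=False)
    beta = np.zeros(p)
    for c in range(n_comp):
        r = np.where(labels[:n] == c)[0]  # rows in this component
        k = np.where(labels[n:] == c)[0]  # columns in this component
        if len(r) and len(k):
            beta[k], *_ = np.linalg.lstsq(X[np.ix_(r, k)], y[r], rcond=None)
    return beta
```

Solving each block separately gives the same answer as one global least-squares fit, but makes the role of each non-zero block explicit.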


Author(s):  
Oskar Maria Baksalary ◽  
Götz Trenkler

Abstract. An alternative look at the linear regression model is taken by proposing an original treatment of a full column rank model (design) matrix. In such a situation, the Moore–Penrose inverse of the matrix can be obtained with a particular formula that is applicable only when the matrix to be inverted can be partitioned columnwise into two matrices of disjoint ranges. It turns out that this approach, besides simplifying derivations, provides novel insight into some of the notions involved in the model and reduces the computational cost of obtaining the sought estimators. The paper also contains a numerical example, based on astronomical observations of the localization of Polaris, demonstrating the usefulness of the proposed approach.
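As background (the paper's columnwise-partition formula itself is not reproduced here): for a full column rank design matrix X, the Moore–Penrose inverse has the closed form X⁺ = (XᵀX)⁻¹Xᵀ, and the least-squares estimator is β̂ = X⁺y. A minimal numerical check on a toy simple-regression matrix:

```python
import numpy as np

# A full column rank design matrix: intercept column plus one regressor.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

# Closed form for full column rank: X+ = (X'X)^-1 X'.
pinv_closed = np.linalg.inv(X.T @ X) @ X.T

# Least-squares estimator via the Moore-Penrose inverse.
beta = pinv_closed @ y
```

`np.linalg.pinv` computes the same matrix via the SVD, which is how the two routes can be compared numerically.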


Author(s):  
Natalie Förster ◽  
Jörg-Tobias Kuhn

Abstract. To monitor students’ progress and adapt instruction to students’ needs, teachers increasingly use repeated assessments with equivalent tests. The present study investigates whether equivalent reading tests can be successfully developed via rule-based item design. Based on theoretical considerations, we identified three item features each for reading comprehension at the word, sentence, and text levels, which should influence the difficulty and time intensity of reading processes. Using optimal design algorithms, a design matrix was calculated, and four equivalent test forms of the German reading test series for second graders (quop-L2) were developed. A total of N = 7,751 students completed the tests. We estimated item difficulty and time intensity parameters as well as person ability and speed parameters using bivariate item response theory (IRT) models, and we investigated the influence of item features on item parameters. Results indicate that all item properties significantly affected either item difficulty or response time. Moreover, as indicated by the IRT-based test information functions and analyses of variance, the four test forms showed similar levels of difficulty and time intensity at the word, sentence, and text levels (all η2 < .002). Results were successfully cross-validated using a sample of N = 5,654 students.
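As background on the test information functions mentioned above, a minimal sketch (a plain 2PL model, not the bivariate IRT model used in the study): test information at an ability level is the sum of per-item Fisher information.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response for
    ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of one 2PL item at ability theta.
    Test information is the sum of this quantity over all items."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)
```

Information peaks where theta equals the item difficulty b, which is why matched item parameters across forms yield similar test information curves.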


2021 ◽  
Vol 14 (12) ◽  
pp. 7909-7928
Author(s):  
Markus D. Petters

Abstract. Tikhonov regularization is a tool for reducing noise amplification during data inversion. This work introduces RegularizationTools.jl, a general-purpose software package for applying Tikhonov regularization to data. The package implements well-established numerical algorithms and is suitable for systems of up to ~1000 equations. Included is an abstraction to systematically categorize specific inversion configurations and their associated hyperparameters. A generic interface translates arbitrary linear forward models defined by a computer function into the corresponding design matrix. This obviates the need to explicitly write out and discretize the Fredholm integral equation, thus facilitating fast prototyping of new regularization schemes associated with measurement techniques. Example applications include inversion of data from scanning mobility particle sizers (SMPSs) and humidified tandem differential mobility analyzers (HTDMAs). Inversion of SMPS size distributions reported in this work builds upon the freely available software DifferentialMobilityAnalyzers.jl. The speed of inversion is improved by a factor of ~200, now requiring between 2 and 5 ms per SMPS scan when using 120 size bins. Previously reported occasional failure to converge to a valid solution is reduced by switching from the L-curve method to generalized cross-validation as the metric to search for the optimal regularization parameter. Higher-order inversions resulting in smooth, denoised reconstructions of size distributions are now included in DifferentialMobilityAnalyzers.jl. This work also demonstrates that an SMPS-style matrix-based inversion can be applied to find the growth factor frequency distribution from raw HTDMA data while also accounting for multiply charged particles.
The outcome of the aerosol-related inversion methods is showcased by inverting multi-week SMPS and HTDMA datasets from ground-based observations, including SMPS data obtained at Bodega Marine Laboratory during the CalWater 2/ACAPEX campaign and co-located SMPS and HTDMA data collected at the US Department of Energy observatory located at the Southern Great Plains site in Oklahoma, USA. Results show that the proposed approaches are suitable for unsupervised, nonparametric inversion of large-scale datasets as well as inversion in real time during data acquisition on low-cost reduced-instruction-set architectures used in single-board computers. The included software implementation of Tikhonov regularization is freely available, general, and domain-independent and thus can be applied to many other inverse problems arising in atmospheric measurement techniques and beyond.
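As a domain-independent illustration of the underlying method (in Python rather than the package's Julia, and not the RegularizationTools.jl API), Tikhonov regularization replaces the ill-posed least-squares problem min ‖Ax − b‖² with the penalized problem min ‖Ax − b‖² + λ²‖Lx‖²:

```python
import numpy as np

def tikhonov_solve(A, b, lam, L=None):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations:
    (A'A + lam^2 L'L) x = A'b.

    L = identity gives zeroth-order regularization; a discrete derivative
    operator gives the higher-order, smoothing variants. The regularization
    parameter lam is typically chosen by generalized cross-validation or the
    L-curve, as discussed in the abstract above.
    """
    n = A.shape[1]
    if L is None:
        L = np.eye(n)
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)
```

Increasing lam trades fidelity to the data for a smaller (or smoother) solution, which is what suppresses noise amplification in the inversion.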


2021 ◽  
Vol 4 (2) ◽  
Author(s):  
Amit Saraswat ◽  
Dipak Kumar

This work examines the effect of cutting parameters on the surface of EN-8 alloy steel. To determine the optimal cutting parameters, response surface methodology was applied using a central composite design matrix. The aim was to model the interactions among the input parameters (cutting speed, feed, and depth of cut) and the output parameter, surface roughness. For this, a second-order response surface model was built. The predicted values were found to be close to the observed values, showing that the model can be used to forecast the surface roughness of EN-8 within the range of parameters studied. Contour and 3-D plots were generated to predict surface roughness. Surface roughness was found to decrease with increasing cutting speed and to increase with feed. Depth of cut had negligible or almost no influence on surface roughness, whereas feed rate affected it most. The optimum values of each parameter for minimum surface roughness were also evaluated.
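A second-order response surface model for three factors contains intercept, linear, pure quadratic, and two-factor interaction terms. A minimal sketch of building that design matrix and fitting it by least squares (illustrative factor settings, not the study's measurements):

```python
import numpy as np

def quadratic_design(X):
    """Second-order response-surface design matrix for three coded factors.

    X is n x 3 (e.g. cutting speed, feed, depth of cut in coded units).
    Columns: intercept, linear terms, pure quadratic terms, and the
    two-factor interaction terms.
    """
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)),
                            x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1 * x2, x1 * x3, x2 * x3])

def fit_response_surface(X, y):
    """Least-squares fit of the full second-order model; returns 10 coefficients."""
    coef, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
    return coef
```

A central composite design, as used in the paper, supplies enough distinct factor settings to estimate all ten coefficients of this model.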


Author(s):  
Subash Shrestha ◽  
Jerry Joseph Erdmann ◽  
Sean A Smith

The use of antimicrobials in formulations of ready-to-eat meat and poultry products has been identified as a major strategy to control Listeria monocytogenes. The USDA-FSIS recommends no more than 2 log of Listeria outgrowth over the stated shelf life if antimicrobials are used as a control measure for a product with post-lethality environmental exposure. This study was designed to understand the efficacy of a clean-label antimicrobial against the growth of L. monocytogenes as affected by product attributes. A response surface method central composite design was used to investigate the effects of product pH, moisture, salt content, and a commercial “clean-label” antimicrobial on the growth of L. monocytogenes in a model turkey deli meat formulation. Thirty treatment combinations of pH (6.3, 6.5, and 6.7), moisture (72, 75, and 78%), salt (1.0, 1.5, and 2.0%), and antimicrobial (0.75, 1.375, and 2.0%), with six replicated center points and eight design star points, were evaluated. Treatments were surface inoculated with a five-strain L. monocytogenes cocktail at a target of 3 log10 CFU/g, vacuum packaged, and stored at 5°C for up to 16 weeks. Populations of L. monocytogenes were enumerated from triplicate samples every week until the stationary growth phase was reached. The enumeration data were fitted to a Baranyi and Roberts growth curve to calculate the lag time and maximum growth rate for each treatment. Linear least-squares regression of the lag time and growth rate against the full quadratic design matrix, including the second-order interaction terms, was performed. Both lag time and maximum growth rate were significantly affected (p < 0.01) by the antimicrobial concentration and product pH. Product moisture and salt content affected (p < 0.05) the lag phase and maximum growth rate, respectively. The availability of a validated growth model assists meat scientists and processors with faster product development and commercialization.
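A minimal sketch of fitting the Baranyi and Roberts (1994) growth model by nonlinear least squares (synthetic, noiseless data; the study's actual fitting procedure and parameter values may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def baranyi(t, y0, ymax, mu, h0):
    """Baranyi and Roberts (1994) growth model in ln CFU/g.

    y0, ymax: initial and maximum populations; mu: maximum specific growth
    rate; h0: dimensionless lag parameter, with lag time = h0 / mu.
    """
    A = t + (1.0 / mu) * np.log(np.exp(-mu * t) + np.exp(-h0)
                                - np.exp(-mu * t - h0))
    return y0 + mu * A - np.log(1.0 + (np.exp(mu * A) - 1.0)
                                / np.exp(ymax - y0))

def fit_growth_curve(t, y, p0):
    """Least-squares fit constrained to positive parameters;
    returns (y0, ymax, mu, h0)."""
    popt, _ = curve_fit(baranyi, t, y, p0=p0,
                        bounds=(1e-6, np.inf), maxfev=10000)
    return popt
```

The fitted mu and h0 give the maximum growth rate and lag time (h0 / mu) that the abstract describes regressing against the experimental design matrix.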


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Qile Zhao ◽  
Jing Guo ◽  
Sijing Liu ◽  
Jun Tao ◽  
Zhigang Hu ◽  
...  

Abstract. The Precise Point Positioning (PPP) technique uses a single Global Navigation Satellite System (GNSS) receiver to collect carrier-phase and code observations and, together with precise satellite orbit and clock corrections, performs centimeter-accuracy positioning. Depending on the observations used, there are basically two approaches: the ionosphere-free combination approach and the raw observation approach. The former eliminates ionosphere effects in the observation domain, while the latter estimates them from uncombined and undifferenced, i.e., raw, observations. These traditional techniques do not fix carrier-phase ambiguities to integers unless additional corrections for satellite hardware biases are provided to the users. To derive the hardware bias corrections on the network side, earlier studies often applied the ionosphere-free combination to obtain ionosphere-free ambiguities from the L1 and L2 ones, even with the raw observation approach. This contribution introduces a variant of the raw observation approach that does not use any ionosphere-free (or narrow-lane) combination operator to derive satellite hardware biases and to compute PPP ambiguity-float and ambiguity-fixed solutions. The reparameterization and the manipulation of the design matrix coefficients are described. A computational procedure is developed to derive the satellite hardware biases directly on the wide-lane (WL) and L1 ambiguities, and the PPP ambiguity-fixed solutions are likewise obtained directly with WL/L1 integer ambiguity resolution. The proposed method is applied to process data from a GNSS network covering a large part of China, producing satellite biases for BeiDou, GPS, and Galileo. The results demonstrate that both accuracy and convergence are significantly improved with integer ambiguity resolution. The contributions of BeiDou to accuracy and convergence are also assessed.
It is shown for the first time that BeiDou-only ambiguity-fixed solutions achieve accuracy similar to that of combined GPS/Galileo solutions, at least in mainland China. The numerical analysis demonstrates that the best results are achieved by GPS/Galileo/BeiDou solutions: the accuracy is better than 6 mm in the horizontal components and better than 20 mm in the height component (one sigma). The mean convergence time for reliable ambiguity fixing is about 1.37 min, with a 0.12 min standard deviation among stations, without using ionosphere corrections or third-frequency measurements. The contribution of BDS is numerically highlighted.
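One standard route to wide-lane ambiguities, shown here as general background rather than as the authors' procedure, is the Melbourne–Wübbena combination, in which geometry, clocks, troposphere, and first-order ionosphere all cancel, leaving the wide-lane ambiguity (plus hardware biases and noise on real data):

```python
C = 299_792_458.0                   # speed of light, m/s
F1, F2 = 1_575.42e6, 1_227.60e6     # GPS L1/L2 frequencies, Hz
LAM_WL = C / (F1 - F2)              # wide-lane wavelength, ~0.862 m

def widelane_ambiguity(L1, L2, P1, P2):
    """Melbourne-Wuebbena combination: wide-lane ambiguity in cycles.

    L1, L2: carrier-phase observations in meters; P1, P2: pseudoranges in
    meters. The wide-lane phase combination minus the narrow-lane code
    combination cancels geometry and first-order ionosphere.
    """
    mw = ((F1 * L1 - F2 * L2) / (F1 - F2)
          - (F1 * P1 + F2 * P2) / (F1 + F2))
    return mw / LAM_WL
```

On real data the result is averaged over an arc and rounded to an integer; the fractional parts are absorbed by the satellite hardware biases the abstract discusses.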


2021 ◽  
Author(s):  
Michael F. Adamer ◽  
Sarah C. Brueningk ◽  
Alejandro Tejada-Arranz ◽  
Fabienne Estermann ◽  
Marek Basler ◽  
...  

With the steadily increasing abundance of omics data produced all over the world, sometimes decades apart and under vastly different experimental conditions, residing in public databases, a crucial step in many data-driven bioinformatics applications is data integration. The challenge of batch effect removal for entire databases lies in the large number, and the coincidence, of batches and desired biological variation, which results in a singular design matrix. This problem cannot currently be solved by any common batch correction algorithm. In this study, we present reComBat, a regularised version of the empirical Bayes method, to overcome this limitation. We demonstrate our approach on the harmonisation of public gene expression data of the human opportunistic pathogen Pseudomonas aeruginosa and study several metrics to empirically demonstrate that batch effects are successfully mitigated while biologically meaningful gene expression variation is retained. reComBat fills the gap in batch correction approaches applicable to large-scale public omics databases and opens up new avenues for data-driven analysis of complex biological processes beyond the scope of a single study.


2021 ◽  
Author(s):  
◽  
Benjamin Speedy

Following the devastating earthquakes of 2010 and 2011 in Christchurch, there is an opportunity to use sustainable urban design variables to redevelop the central city in order to address climate change concerns and reduce CO₂ emissions from land transport. Literature from a variety of disciplines establishes that four sustainable urban design variables (increased density, mixed-use development, street layout and city design, and the provision of sustainable public transport) can reduce car dependency and vehicle kilometres travelled within urban populations, which are widely regarded as indicators of the negative environmental effects of transport. The key question for the research is: to what extent has this opportunity been seized by NZ’s Central Government, which is overseeing the central city redevelopment? To explore this question, the redevelopment plans for the central city of Christchurch are evaluated against an adapted urban design matrix to determine whether their implementation is likely to achieve a reduction in CO₂ emissions from land transport. Data obtained through interviews with experts are used to further explore the extent to which sustainable urban design variables can be employed to enhance sustainability and reduce CO₂ emissions. The analysis shows that the four urban design variables will feature in the Central Government’s redevelopment plans, although the extent to which they are employed and their likely success in reducing CO₂ emissions will vary. Ultimately, the opportunity to redevelop the central city of Christchurch to reduce CO₂ emissions from land transport will be undermined by timeframe, co-ordination, and leadership barriers.


