experimental runs
Recently Published Documents


TOTAL DOCUMENTS

163
(FIVE YEARS 59)

H-INDEX

13
(FIVE YEARS 2)

2021 ◽  
pp. 089270572110530
Author(s):  
Nagarjuna Maguluri ◽  
Gamini Suresh ◽  
K Venkata Rao

Fused deposition modeling (FDM) is a fast-expanding additive manufacturing technique for fabricating various polymer components in engineering and medical applications. The mechanical properties of components printed with the FDM method are influenced by several process parameters. In the current work, the influence of nozzle temperature, infill density, and printing speed on the tensile properties of specimens printed using polylactic acid (PLA) filament was investigated. To achieve better tensile properties, including elastic modulus, tensile strength, and fracture strain, a Taguchi L8 orthogonal array was used to frame the experimental runs, and eight experiments were conducted. The results demonstrate that nozzle temperature most strongly influences the tensile properties of the FDM-printed PLA products, followed by infill density. The optimum processing parameters for the FDM-printed PLA material were a nozzle temperature of 220°C, an infill density of 100%, and a printing speed of 20 mm/s.
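As a minimal sketch (not the authors' code) of how such an eight-run design can be enumerated: with three two-level factors, the Taguchi L8 assignment covers all 2^3 level combinations. The abstract reports only the optimum settings (220°C, 100%, 20 mm/s), so the alternate levels below are hypothetical placeholders.

```python
# Enumerate an eight-run, three-factor, two-level design. With three
# factors, the Taguchi L8 array coincides with the 2^3 full factorial.
from itertools import product

nozzle_temp_C   = [200, 220]   # 200 C is an assumed low level
infill_pct      = [50, 100]    # 50% is an assumed low level
print_speed_mms = [20, 60]     # 60 mm/s is an assumed second level

for run, (temp, infill, speed) in enumerate(
        product(nozzle_temp_C, infill_pct, print_speed_mms), start=1):
    print(f"Run {run}: nozzle={temp} C, infill={infill}%, speed={speed} mm/s")
```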


2021 ◽  
Author(s):  
Hassan Tariq

There is a huge and rapidly increasing amount of data being generated by social media, mobile applications and sensing devices. Big data is the term usually used to describe such data, commonly characterized in terms of the 3Vs: volume, variety and velocity. In order to process and mine such massive amounts of data, several approaches and platforms have been developed, such as Hadoop. Hadoop is a popular open-source distributed and parallel computing framework. It has a large number of configurable parameters which can be set before the execution of jobs to optimize the resource utilization and execution time of a cluster. These parameters have a significant impact on system resources and execution time, and optimizing the performance of a Hadoop cluster by tuning such a large number of them is a tedious task. Most current big data modeling approaches do not capture the complex interaction between configuration parameters and changes in the cluster environment, such as the use of different datasets or types of query. This makes it difficult to predict, for example, the execution time of a job or the resource utilization of a cluster. Other relevant attributes include the configuration parameters, the structure of the query, the dataset, the number of nodes and the infrastructure used.

Our first main objective was to design reliable experiments to understand the relationships between these attributes. Before designing and implementing the actual experiments, we applied Hazard and Operability (HAZOP) analysis to identify operational hazards that can affect the normal working of the cluster and the execution of Hadoop jobs. This brainstorming activity improved the design and implementation of our experiments by improving their internal validity, and it helped us identify the considerations that must be taken into account for reliable results. After implementing our design, we characterized the relationships between different Hadoop configuration parameters and network and system performance measures.

Our second main objective was to investigate the use of machine learning to model and predict the resource utilization and execution time of Hadoop jobs. Resource utilization and execution time are affected by different attributes, such as the configuration parameters and the structure of the query, so in order to estimate or predict them, either qualitatively or quantitatively, it is important to understand the impact of different combinations of these Hadoop job attributes. One could uncover this by conducting experiments with many different combinations of parameters, but it is very difficult to run such a large number of jobs, interpret the data manually, extract patterns from it, and produce a model that generalizes to unseen scenarios. To automate data extraction and to model the complex behavior of the different attributes of a Hadoop job, machine learning was used. Our decision-tree-based approach enabled us to systematically discover significant patterns in the data. Our results showed that the decision tree models constructed for different resources and for execution time were informative and robust: they were able to generalize over a wide range of minor and major environmental changes, such as changes in dataset, cluster size and infrastructure (e.g., Amazon EC2). Moreover, the use of different correlation and regression techniques, such as M5P, Pearson's correlation and k-means clustering, confirmed our findings and provided further insight into the relationships of the different attributes with each other. M5P is a classification and regression technique that predicted the functional relationships among different job attributes, while k-means clustering allowed us to identify the experimental runs that show similar resource utilization and execution time. Statistical significance tests, used to validate the changes in results across different experimental runs, also demonstrated the effectiveness of our resource and performance modelling and prediction method.
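As a rough illustration of the modelling step, the sketch below fits a decision-tree regressor to synthetic Hadoop job data. The thesis does not specify a toolkit, so scikit-learn is used purely for illustration; the feature names, value ranges and runtime formula are hypothetical stand-ins for the measured runs.

```python
# Illustrative decision-tree regression on synthetic Hadoop job attributes;
# none of these numbers come from the thesis.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical job attributes: two configuration parameters, dataset size,
# and cluster size.
X = np.column_stack([
    rng.choice([64, 128, 256], n),   # e.g. HDFS block size (MB), assumed
    rng.choice([2, 4, 8], n),        # e.g. number of reduce tasks, assumed
    rng.uniform(1, 100, n),          # dataset size (GB)
    rng.choice([4, 8, 16], n),       # number of nodes
])

# Synthetic execution time (seconds) with an interaction term plus noise,
# standing in for measured job runtimes.
y = 30 + 2.5 * X[:, 2] / X[:, 3] * X[:, 1] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeRegressor(max_depth=5).fit(X_train, y_train)
print(f"R^2 on held-out runs: {model.score(X_test, y_test):.2f}")
```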


2021 ◽  
Author(s):  
Michele Lustrino ◽  
et al.

Experimental and analytical procedures, including SEM images of the experimental runs, and a data set containing the entire set of EMP analyses of glass and quenched minerals, as well as the original composition of the starting materials and additional plots.


Author(s):  
Peter Christian Endler ◽  
Bernhard Harrer

Introduction: In the course of more than two decades of experimental work on a model with amphibians and extremely diluted thyroxine, one experiment in particular, investigating the effect of an ultra-high dilution of thyroxine (T30x) versus analogously prepared water (W30x) in amphibians from highland biotopes, was found to be reproducible. A total of 22 experimental runs were performed between 1990 and 2011, 15 by the initial researchers and 7 by altogether 5 independent researchers (1-5). In most of these (the only exceptions being two runs performed and reported by the initial team), a trend was found of T30x animals developing more slowly than W30x animals. Pooled T30x values obtained by the initial team were 10.1% lower than W30x values (100%) (p < 0.01), and pooled T30x values from the 5 independent researchers were 12.4% lower (p < 0.01). The purpose of this study was to test the hypothesis that storing the animals at 4°C prior to the experiment does not influence (i.e. inhibit) the effect of T30x. Cooling seemed to be a promising means of facilitating the transport of the highland larvae to laboratories and of synchronizing experiments. Methods: The original protocol was followed, but the animals were stored at 4°C for several days prior to the experiment. Results: In contrast to the majority of previous experiments, no clear trend was found of T30x values differing from W30x values, i.e. of animals developing more slowly under the influence of T30x (p > 0.05). Conclusion: This experiment failed to reproduce the previously observed inhibiting effect of ultra-high diluted thyroxine on highland amphibians. The hypothesis that storage of the animals at 4°C does not influence the effect of T30x could not be proven; on the contrary, this intermediate cooling of the larvae may be responsible for the failure of the replication.
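Purely as an illustration of the two-group comparison behind these pooled percentages, here is a minimal scipy sketch; the sample sizes, spreads and scores are synthetic placeholders, not the study's data.

```python
# Illustrative only: synthetic development scores standing in for the
# pooled T30x vs. W30x comparison; none of these numbers are study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

w30x = rng.normal(100.0, 8.0, 40)  # control, normalized to 100%
t30x = rng.normal(89.9, 8.0, 40)   # ~10% slower development (assumed spread)

t_stat, p_value = stats.ttest_ind(t30x, w30x, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4g}")
```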


2021 ◽  
Vol 12 (2) ◽  
pp. 392-400
Author(s):  
Henry Okwudili Chibudike ◽  
Nwaebuni Ebube Odega ◽  
Eunice Chinedum Chibudike ◽  
Olubamike Adetutu Adeyoju ◽  
Nkemdilim Ifeanyi Obi

In this research work, the effects of three pulping additives (polysulfide, anthraquinone, and a surfactant) used in the monoethanolamine (MEA) pulping of agro-biomass, and their possible interactions and synergy effects on pulp screened yield, were investigated. The pulping conditions of the digester were adjusted so that the experimental design considered the following factors and levels: 75% MEA charge, 150°C cooking temperature, 90 minutes cooking time, and a 4:1 liquor-to-biomass ratio. Factor 1: 0, 0.25 and 0.5% surfactant charge; Factor 2: 0, 2.0 and 4.0% polysulfide charge; Factor 3: 0, 0.25 and 0.5% anthraquinone charge. The agro-biomass was evaluated in terms of pulp screened yield. Heating time ranged from 5 to 45 minutes, maximum cooking time did not exceed 90 minutes, the liquor-to-biomass ratio was 4:1, and the liquor charge was 75% MEA. The total yield for MEA with a 4% polysulfide (PS) dosage, without surfactant or anthraquinone, was the highest (59.08%) of all twenty-seven (27) experimental runs, but it furnished the highest reject (12.26%) and an unimpressive screened yield of 46.82%, among the poorest outcomes. MEA with a 0.25% anthraquinone (AQ) dosage, without surfactant or polysulfide, furnished a total yield of 50.32% and a pulp screened yield of 50.03% with a minimal reject of 0.29%, proving more efficient than the use of polysulfide. MEA pulping with a 0.5% surfactant dosage, without the other additives (AQ and PS), achieved a 51.12% total yield with a reject of only 0.33%, furnishing the highest pulp screened yield (50.79%) and thus the greatest efficiency among the three additives when used singly. The combination of all three pulping additives furnished its highest screened yield (52.43%) with 4.23% reject in scenario E, experiment No. 15, which combined 0.25% surfactant, 0.25% anthraquinone and 4% polysulfide charge, showing the best synergistic effect. However, the highest screened yield (53.04%) and the lowest reject (0.13%) overall, the best outcome among the entire twenty-seven (27) experimental runs, came from the combination of 0.25% surfactant and 2% polysulfide charge alone, in experiment 20 of scenario G. Analysis of the overall experimental results shows that there is considerable advantage and a positive synergy effect in the use of additives in pulping operations.
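A minimal sketch of the 27-run, three-factor, three-level full factorial implied by these factor levels; the run numbering and any mapping to the "scenarios" mentioned above are assumptions, as the abstract does not define them.

```python
# Enumerate the 3 x 3 x 3 = 27 experimental runs over the additive charges
# listed in the abstract.
from itertools import product

surfactant_pct    = [0.0, 0.25, 0.5]
polysulfide_pct   = [0.0, 2.0, 4.0]
anthraquinone_pct = [0.0, 0.25, 0.5]

for run, (surf, ps, aq) in enumerate(
        product(surfactant_pct, polysulfide_pct, anthraquinone_pct), start=1):
    print(f"Run {run:2d}: surfactant={surf}%, PS={ps}%, AQ={aq}%")
```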


Plants ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 2505
Author(s):  
Amelia A. Limbongan ◽  
Shane D. Campbell ◽  
Victor J. Galea

Mimosa bush (Vachellia farnesiana) is an invasive woody weed widely distributed in Australia. While it can be controlled using several mechanical and chemical techniques, this study evaluated a novel herbicide delivery mechanism that minimizes the risk of spray drift and potential non-target damage. This method, developed by Bioherbicides Australia, involves the implantation of encapsulated granular herbicides into the stem of intact plants or into the stump after cutting plants off close to ground level (cut stumps). Trials were implemented near Moree (New South Wales, Australia) on intact plants (two experimental runs) and cut-stumped plants (two experimental runs). For each trial, an untreated control plus the conventional basal bark application of a liquid formulation of triclopyr + picloram mixed with diesel was included for comparison. Encapsulated glyphosate, aminopyralid + metsulfuron-methyl, hexazinone and clopyralid were also tested in all trials. In addition, encapsulated triclopyr + picloram and metsulfuron-methyl were included in one of the intact plant trials. Aminopyralid + metsulfuron-methyl was consistently the most effective on cut stump and intact plants, whilst clopyralid provided the highest mortality when applied to cut stumps and single-stemmed intact plants. For multi-stemmed intact plants in particular, clopyralid should be applied to each stem. Overall, the highest efficacy was achieved on single-stemmed plants, but with further refinement of the technique it should be possible to achieve similar results for multi-stemmed individuals. This method reduced herbicide use and environmental contamination while significantly improving the speed of treatment.


Blood ◽  
2021 ◽  
Vol 138 (Supplement 1) ◽  
pp. 2851-2851
Author(s):  
Kim G. Hankey ◽  
Tim Luetkens ◽  
Stephanie Avila ◽  
John McLenithan ◽  
John Braxton ◽  
...  

Introduction: Chimeric antigen receptor (CAR) T-cell therapy has emerged as a powerful immunotherapy for various forms of cancer, especially hematologic malignancies. However, several factors limit the use of CAR T-cells in a wider number of patients. The long manufacturing time (usually 3-4 weeks with standard-of-care products) poses a major challenge in treating chemorefractory patients in a timely fashion. Thus, we evaluated the feasibility of a fresh-in, fresh-out, short, eight-day manufacturing process performed locally to expedite CAR T-cell drug product delivery. Herein we report the results of two experimental runs using this modified short eight-day culture process.

Methods: We used the CliniMACS Prodigy® closed manufacturing system and modified the 12-day T Cell Transduction (TCT) activity matrix protocol to produce anti-CD19 CAR T-cells in eight days. Normal donor mononuclear cells were collected by leukapheresis and enriched for CD4 and CD8 cells by immunomagnetic bead selection in three stages. Enriched T-cells were activated with MACS GMP T Cell TransAct and cultured at 37°C with 5% CO2 for 16-24 hours in media supplemented with 12.5 mcg/L each of IL-7 and IL-15, and 3% heat-inactivated human AB serum. On day 1 of the process, activated T-cells were transduced with a lentiviral vector encoding the anti-CD19 CAR (Lentigen, LTG1563) at a multiplicity of infection (MOI) of 7-10. On day 3, the cells were washed twice and the media volume adjusted to feed the expanding cells. The culture was fed again on day 5 by exchanging half the volume of spent media with fresh supplemented media. Media supplemented with cytokines alone was used for the remaining four washes on days 6, 7 and 8. Transduction efficiency and T-cell subset frequencies were assessed by flow cytometry on the MACSQuant-10 with the CAR-T Express Mode package on days 3, 6 and 8. Subsequently, we performed an ELISPOT assay for CAR T-cell potency testing and in-vivo efficacy testing in NSG mice bearing Raji B-cell lymphoma.

Results: Refer to Table 1 for details on the cell populations of interest for experiments 1 and 2. The proportion of CD3 T-cells increased from 97% on day 0 to >99.5% on the harvest day (day 8). CD3 T-cells expanded 11.6- and 34.2-fold by day 8 compared to day 0. A transduction efficiency of ~40% was observed in both experimental runs. Final CD19 CAR T-cell numbers ranged from 9.3-13.3 x 10e8, with viability of CD3+ cells >93% for both runs. Day 3 of the culture is an important decision point, since a clinical decision to proceed with lymphodepletion must be made to facilitate the fresh-in, fresh-out approach. Here we observed reliable transduction of T-cells on day 3, with an average efficiency of 15.9%; the day 3 data reliably provided the information needed to proceed with lymphodepletion. A total of 100,000 CD19 CAR T-cells produced in experiment #1 were exposed to beads coated with CD19 protein, BCMA control protein, or T cell-activating beads coated with anti-CD3 and anti-CD28 antibodies in an ELISPOT plate. The spots in Figure 1 represent individual CAR T-cells producing IFN-gamma. This novel ELISPOT assay shows high IFN-gamma production by CD19 CAR T-cells in response to the target antigen or to unspecific stimulation using CD3/CD28 beads. Subsequently, NSG mice received injections of 5x10e5 Raji B-cell lymphoma cells stably expressing luciferase into the tail vein. One week later, 4 mice per group received individual i.v. injections of 4x10e6 CD19 CAR T-cells, 0.3x10e6 CD19 CAR T-cells, 4x10e6 mock-transduced CAR T-cells, or media. The survival curves in Figure 2 show the survival of the mice after treatment, with the best survival seen at the 4x10e6 dose.

Conclusions: In these experimental runs, we were able to generate CD19 CAR+ T-cells in a short eight-day manufacturing process. The final product characteristics (viability, transduction efficiency and doses) were comparable to clinical formulations. Further, the point-of-care potency assay suggests high IFN-gamma production, and the CD19 tumor was eliminated in the in vivo murine model. Point-of-care CAR T-cell production allows for a shorter vein-to-vein time and offers a dramatic reduction in product cost. Lastly, the novel ELISPOT potency assay allows for rapid and visual functional analysis of the CAR T-cell product.

Figure 1.

Disclosures: Hardy: Kite/Gilead: Membership on an entity's Board of Directors or advisory committees; American Gene Technologies, International: Membership on an entity's Board of Directors or advisory committees; InCyte: Membership on an entity's Board of Directors or advisory committees. Abramowski-Mock: Miltenyi Biotec: Current Employment. Mittelstaet: Miltenyi Biotec: Current Employment. Dudek: Miltenyi Biotec: Current Employment.


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0259350
Author(s):  
Aseesh Pandey ◽  
Tarun Belwal ◽  
Sushma Tamta ◽  
Ranbeer S. Rawal

In this study, heat-assisted extraction conditions were optimized to enhance the extraction yield of antioxidant polyphenols from leaves of Himalayan Quercus species. In initial experiments, a five-factor Plackett-Burman design comprising 12 experimental runs was tested against the total polyphenolic content (TP). Among the factors, XA (extraction temperature), XC (solvent concentration) and XE (sample-to-solvent ratio) showed a significant influence on yield. These influential factors were further subjected to a three-factor, three-level Box-Wilson central composite design comprising 20 experimental runs, and 3D response surface methodology plots were used to determine the optimum conditions [i.e. XA: 80°C, XC: 87%, XE: 1 g/40 ml]. This optimized condition was then applied to other Quercus species of the western Himalaya, India. High-performance liquid chromatography (HPLC) revealed the occurrence of 12 polyphenols in the six screened Quercus species, with catechin in the highest concentration followed by gallic acid. Among them, Q. franchetii and Q. serrata contained the largest numbers of polyphenolic antioxidants (8 each). This optimized extraction condition for Quercus species can be utilized for the precise quantification of polyphenols and for their use in the pharmaceutical industry as a potential substitute for synthetic polyphenols.
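As a rough sketch of how a three-factor central composite design of this kind is laid out in coded units, the Python below builds 8 factorial, 6 axial and 6 centre points to reach the 20 runs quoted above; the rotatable alpha and the number of centre points are assumptions, since the abstract does not report them.

```python
# Construct a coded three-factor central composite design:
# 8 factorial + 6 axial + 6 centre points = 20 runs.
import numpy as np
from itertools import product

alpha = (2 ** 3) ** 0.25   # rotatable alpha for 3 factors, ~1.682 (assumed)
n_center = 6               # assumed so that 8 + 6 + 6 = 20 runs

factorial = np.array(list(product([-1.0, 1.0], repeat=3)))  # 8 corner points
axial = np.vstack([sign * alpha * np.eye(3)[i]              # 6 axial points
                   for i in range(3) for sign in (-1, 1)])
center = np.zeros((n_center, 3))                            # 6 centre points

design = np.vstack([factorial, axial, center])
print(design.shape)        # -> (20, 3): the 20 coded experimental runs
```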

