Improving Formaldehyde Removal from Water and Wastewater by Fenton, Photo-Fenton and Ozonation/Fenton Processes through Optimization and Modeling

Water ◽  
2021 ◽  
Vol 13 (19) ◽  
pp. 2754
Author(s):  
Ahmad Hosseinzadeh ◽  
Ali Asghar Najafpoor ◽  
Ali Asghar Navaei ◽  
John L. Zhou ◽  
Ali Altaee ◽  
...  

This study aimed to assess, optimize and model the efficiencies of Fenton, photo-Fenton and ozonation/Fenton processes in formaldehyde elimination from water and wastewater using the response surface methodology (RSM) and artificial neural network (ANN). A sensitivity analysis was used to determine the importance of the independent variables. The influences of different variables, including H2O2 concentration, initial formaldehyde concentration, Fe dosage, pH, contact time, UV and ozonation, on formaldehyde removal efficiency were studied. The optimized Fenton process demonstrated 75% formaldehyde removal from water. The best performance, with 80% formaldehyde removal from wastewater, was achieved using the combined ozonation/Fenton process. The developed ANN model demonstrated better adequacy and goodness of fit, with an R2 of 0.9454, than the RSM model, with an R2 of 0.9186. The sensitivity analysis showed pH as the most important factor (31%) affecting the Fenton process, followed by the H2O2 concentration (23%), Fe dosage (21%), contact time (14%) and formaldehyde concentration (12%). The findings demonstrated that these treatment processes and models are important tools for formaldehyde elimination from wastewater.
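The ANN-versus-RSM comparison above rests on the coefficient of determination. A minimal pure-Python sketch of that comparison, with hypothetical removal-efficiency data (not the study's measurements):

```python
def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1.0 - ss_res / ss_tot

# Hypothetical removal efficiencies (%): observed vs. two models' fits
observed = [60.0, 68.0, 75.0, 71.0, 64.0]
ann_pred = [61.0, 67.5, 74.0, 70.5, 64.5]
rsm_pred = [63.0, 65.0, 72.0, 73.5, 66.0]

print(round(r_squared(observed, ann_pred), 4))
print(round(r_squared(observed, rsm_pred), 4))
```

The model with the higher R2 on held-out data is the better fit; the study reaches its ANN-over-RSM conclusion by exactly this comparison.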

2015 ◽  
Vol 7 (4) ◽  
pp. 443-448
Author(s):  
Dovilė Kulikauskaitė ◽  
Dainius Paliulis

Formaldehyde is one of the most chemically reactive compounds discharged with untreated or only partially treated industrial wastewater, and it is hazardous to both the environment and human health. Formaldehyde vapors strongly irritate the skin and can damage the eyes and the respiratory tract. Because formaldehyde is toxic to the environment and living organisms, it must be removed from wastewater before discharge to natural waters. Many methods are used for formaldehyde removal from wastewater, including biological treatment, evaporation and membrane separation, but most have disadvantages. The adsorption method was chosen for this research because of its advantages: it is fast, cheap, universal and widely applicable. The experiment was carried out with natural zeolite at different contact times and with formaldehyde solutions of different concentrations. The formaldehyde concentration was determined by a photocolorimetric method based on the reaction of formaldehyde with chromotropic acid. The average sorption efficiency was highest at the lowest formaldehyde concentration, 2 mg/l (45.94%), after eight hours of contact with the adsorbent. Sorption efficiency increased with contact time, but beyond 12 hours it remained constant because the zeolite became saturated.
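The reported efficiency follows the standard removal formula. A small sketch using the abstract's 2 mg/l and 45.94% figures; the equilibrium concentration is back-calculated for illustration, not reported in the study:

```python
def sorption_efficiency(c_initial, c_final):
    """Removal efficiency (%) = (C0 - Ce) / C0 * 100."""
    return (c_initial - c_final) / c_initial * 100.0

c0 = 2.0     # initial formaldehyde concentration, mg/l
eff = 45.94  # reported average efficiency after 8 h, %

# Equilibrium concentration implied by that efficiency
ce = c0 * (1.0 - eff / 100.0)
print(round(ce, 4))                          # residual formaldehyde, mg/l
print(round(sorption_efficiency(c0, ce), 2))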


Membranes ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 70
Author(s):  
Jasir Jawad ◽  
Alaa H. Hawari ◽  
Syed Javaid Zaidi

The forward osmosis (FO) process is an emerging technology that has been considered as an alternative to desalination due to its low energy consumption and less severe reversible fouling. Artificial neural networks (ANNs) and response surface methodology (RSM) have become popular for the modeling and optimization of membrane processes. RSM requires the data on a specific experimental design whereas ANN does not. In this work, a combined ANN-RSM approach is presented to predict and optimize the membrane flux for the FO process. The ANN model, developed based on an experimental study, is used to predict the membrane flux for the experimental design in order to create the RSM model for optimization. A Box–Behnken design (BBD) is used to develop a response surface design where the ANN model evaluates the responses. The input variables were osmotic pressure difference, feed solution (FS) velocity, draw solution (DS) velocity, FS temperature, and DS temperature. The R2 obtained for the developed ANN and RSM model are 0.98036 and 0.9408, respectively. The weights of the ANN model and the response surface plots were used to optimize and study the influence of the operating conditions on the membrane flux.
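A Box-Behnken design in coded units can be constructed directly. The sketch below uses the common textbook construction for k factors: each pair of factors takes all +/-1 combinations while the rest sit at the center. Real RSM software may add blocking or extra center points:

```python
from itertools import combinations, product

def box_behnken(k, center_points=1):
    """Box-Behnken design in coded units (-1, 0, +1) for k >= 3 factors."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    # Center points (all factors at their midpoint)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

design = box_behnken(3)
print(len(design))  # 3 factor pairs x 4 sign combinations + 1 center = 13
```

For the five factors in this abstract the same construction yields C(5,2) x 4 = 40 edge runs plus center points; the ANN then supplies the response value at each run, which is exactly how the combined ANN-RSM approach avoids running every design point experimentally.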


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. e18012-e18012
Author(s):  
Karthik Ramakrishnan ◽  
Ali Mojebi ◽  
Dieter Ayers ◽  
Diana Romana Chirovsky ◽  
Rebekah Borse ◽  
...  

e18012 Background: In the KEYNOTE-048 trial, pembrolizumab as monotherapy (P) and in combination with platinum+5FU chemotherapy (P+C) versus cetuximab+platinum+5FU (EXTREME regimen) significantly improved overall survival (OS) in the combined positive score (CPS) ≥1 (hazard ratio: 0.74; 95% confidence interval: 0.61-0.90) and total (0.72; 0.60-0.87) R/M HNSCC populations, respectively, and was approved by the FDA in these patient populations. While the EXTREME regimen is considered standard of care in 1L R/M HNSCC, other systemic treatment options including cetuximab+platinum+docetaxel (TPEx regimen), platinum+paclitaxel/taxane (Pt+T), and platinum+5FU (Pt+F) are also commonly used. Due to the lack of head-to-head comparisons with pembrolizumab, a network meta-analysis (NMA) was conducted to estimate the comparative efficacy of P and P+C versus these interventions in 1L R/M HNSCC. Methods: A systematic literature review (SLR) was conducted on November 13, 2019 to identify randomized controlled trials for the relevant interventions. Data were extracted for the OS and progression-free survival (PFS) outcomes. NMA analyses were conducted for the total population and for the CPS ≥1 and CPS ≥20 subgroups in a Bayesian framework using proportional hazards (base case) and time-varying (sensitivity analysis) treatment-effect models. The deviance information criterion was used to compare the goodness-of-fit of the alternative survival models. Results: The SLR identified 28 trials, of which six trials matched the trial eligibility criteria of KEYNOTE-048 and were included in the NMA. Results from the fixed-effects NMA for P and P+C are summarized in the table below for the FDA-indicated population. Improvement in OS was noted for P and P+C versus EXTREME, Pt+T, and Pt+F, and a trend in improved OS versus TPEx was observed. The sensitivity analysis showed improved OS over time across all comparisons. PFS was improved with P and P+C versus Pt+F and comparable versus other interventions.
These results were generally consistent for P and P+C in the CPS ≥1 and CPS ≥20 patient subgroups. Additionally, NMA results versus EXTREME were consistent with the KEYNOTE-048 trial results. Conclusions: Pembrolizumab (P or P+C) showed improved OS and comparable PFS outcomes versus alternative 1L R/M HNSCC interventions, consistent with the efficacy results versus EXTREME observed in the KEYNOTE-048 trial. [Table: see text]
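An NMA combines treatments through common comparators on the log-hazard scale. A minimal Bucher-style indirect comparison sketch, with hypothetical hazard ratios rather than the trial results above:

```python
import math

def indirect_hr(hr_ab, ci_ab, hr_cb, ci_cb):
    """Bucher indirect comparison of A vs C through common comparator B:
    log HR(A vs C) = log HR(A vs B) - log HR(C vs B); variances add.
    CIs are (lower, upper) 95% intervals."""
    z = 1.96
    se = lambda low, high: (math.log(high) - math.log(low)) / (2 * z)
    log_ac = math.log(hr_ab) - math.log(hr_cb)
    se_ac = math.sqrt(se(*ci_ab) ** 2 + se(*ci_cb) ** 2)
    return (math.exp(log_ac),
            math.exp(log_ac - z * se_ac),
            math.exp(log_ac + z * se_ac))

# Hypothetical inputs: A vs B HR 0.72 (0.60-0.87); C vs B HR 0.95 (0.80-1.10)
hr, lo, hi = indirect_hr(0.72, (0.60, 0.87), 0.95, (0.80, 1.10))
print(round(hr, 3), round(lo, 3), round(hi, 3))
```

A full Bayesian NMA as used in the study generalizes this pairwise step to the whole evidence network, but the log-scale combination of relative effects is the same idea.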


2013 ◽  
Vol 69 (4) ◽  
pp. 768-774 ◽  
Author(s):  
André L. N. Mota ◽  
Osvaldo Chiavone-Filho ◽  
Syllos S. da Silva ◽  
Edson L. Foletto ◽  
José E. F. Moraes ◽  
...  

An artificial neural network (ANN) was implemented for modeling phenol mineralization in aqueous solution using the photo-Fenton process. The experiments were conducted in a photochemical multi-lamp reactor equipped with twelve fluorescent black light lamps (40 W each) irradiating UV light. A three-layer neural network was optimized in order to model the behavior of the process. The concentrations of ferrous ions and hydrogen peroxide, and the reaction time were introduced as inputs of the network and the efficiency of phenol mineralization was expressed in terms of dissolved organic carbon (DOC) as an output. Both concentrations of Fe2+ and H2O2 were shown to be significant parameters on the phenol mineralization process. The ANN model provided the best result through the application of six neurons in the hidden layer, resulting in a high determination coefficient. The ANN model was shown to be efficient in the simulation of phenol mineralization through the photo-Fenton process using a multi-lamp reactor.
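The 3-input, 6-hidden-neuron, 1-output network described above has the following basic forward pass. The weights here are random placeholders, not the trained model:

```python
import math, random

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a 3-6-1 MLP with sigmoid hidden units."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

random.seed(0)
n_in, n_hid = 3, 6
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b_hidden = [random.uniform(-1, 1) for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid)]
b_out = random.uniform(-1, 1)

# x = [Fe2+ conc., H2O2 conc., reaction time], scaled to [0, 1]
y = forward([0.5, 0.5, 0.5], w_hidden, b_hidden, w_out, b_out)
print(y)
```

Training would adjust the weights to minimize the error between this output and the measured DOC-based mineralization efficiency; the abstract's result is that six hidden neurons sufficed for a high determination coefficient.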


Energies ◽  
2020 ◽  
Vol 13 (3) ◽  
pp. 571 ◽  
Author(s):  
Azadeh Sadeghi ◽  
Roohollah Younes Sinaki ◽  
William A. Young ◽  
Gary R. Weckman

As the level of greenhouse gas emissions increases, so does the importance of the energy performance of buildings (EPB). Two of the main factors used to measure EPB are a structure’s heating load (HL) and cooling load (CL). HLs and CLs depend on several variables, such as relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution. This research uses deep neural networks (DNNs) to forecast HLs and CLs for a variety of structures. The DNNs explored in this research include multi-layer perceptron (MLP) networks, and each of the models was developed through extensive testing with varying numbers of layers, processing elements, and other data preprocessing techniques. As a result, a DNN is shown to be an improvement for modeling HLs and CLs compared to traditional artificial neural network (ANN) models. In order to extract knowledge from a trained model, a post-processing technique called sensitivity analysis (SA) was applied to the model that performed best with respect to the selected goodness-of-fit metric on an independent set of testing data. There are two forms of SA, local and global methods, but both have the same purpose: determining the significance of independent variables within a model. Local SA assumes inputs are independent of each other, while global SA does not. To further the contribution of the research presented within this article, the results of a global SA, called state-based sensitivity analysis (SBSA), are compared to the results obtained from a traditional local technique, called sensitivity analysis about the mean (SAAM). The results of the research demonstrate an improvement over existing conclusions found in the literature, which is of particular interest to decision-makers and designers of building structures.
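Sensitivity analysis about the mean (SAAM) perturbs one input at a time around its mean value and measures the output response. A sketch with a hypothetical linear surrogate for heating load; the coefficients and means are invented for illustration, not taken from the article:

```python
def saam(model, means, delta=0.1):
    """Local sensitivity about the mean: perturb each input +/- delta
    (fractional) around its mean with the others held fixed, record the
    output change, and return each input's normalized share."""
    changes = []
    for i, m in enumerate(means):
        low, high = list(means), list(means)
        low[i] = m * (1.0 - delta)
        high[i] = m * (1.0 + delta)
        changes.append(abs(model(high) - model(low)))
    total = sum(changes)
    return [c / total for c in changes]

# Toy surrogate: HL from surface area, wall area, glazing area (made up)
model = lambda x: 0.05 * x[0] + 0.02 * x[1] + 0.30 * x[2]
shares = saam(model, [600.0, 300.0, 40.0])
print([round(s, 3) for s in shares])
```

Because each input is varied alone, this is the "inputs independent" local method the abstract contrasts with global techniques such as SBSA, which vary inputs jointly.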


Water polluted with microorganisms and pathogens is one of the most significant hazards to public health. Microorganisms potentially unsafe to human health can be destroyed through effective disinfection. To prevent the re-growth of microorganisms, it is also advisable to maintain a residual disinfectant in the water distribution network. The most frequently used disinfectant is chlorine. When the chlorine dosage is too low, there will be insufficient residual at the end of the water network, leading to re-growth of microorganisms. Adding an excessive amount of chlorine will lead to corrosion of the pipeline network and to the formation of disinfection by-products (DBPs), including carcinogens. Thus, to determine the best chlorine dosage, it is essential to model the system to forecast chlorine decay within the network. In this research study, two major modeling and optimization strategies were employed to assess the optimum dosage of chlorine for municipal water disinfection and to predict residual chlorine at any predetermined node within the water distribution network. Artificial neural network (ANN) modeling techniques were used to forecast chlorine concentrations at different nodes in the urban water distribution system in Muscat, the capital of the Sultanate of Oman. A one-year dataset from one of the distribution systems was used for network modeling in this study. The input factors considered for the response surface methodology (RSM) model were pH, chlorine dosage and time. The response variables for the RSM model were total organic carbon (TOC), biological oxygen demand (BOD) and residual chlorine. An ANN model for residual chlorine was created with pH, inlet concentration of chlorine and initial temperature as input parameters and residual chlorine in the piping network as the output parameter.
The ANN model created using these data can be employed to forecast the residual chlorine value at any given location in the urban water network. The results of this study, which exploit the ability of an ANN model to predict residual chlorine and water quality parameters, demonstrate the potential to capture the complex, higher-order behavior between input and output parameters that exists in urban water distribution systems.
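Chlorine decay in bulk water is commonly modeled as first-order, which is the kind of closed-form relationship the ANN above replaces with a data-driven one. A sketch with a hypothetical dose and decay constant, not values from the study:

```python
import math

def residual_chlorine(c0, k, t):
    """First-order bulk decay: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

# Hypothetical values: 1.2 mg/l inlet dose, decay constant 0.05 1/h
for t in (0, 6, 12, 24):
    print(t, round(residual_chlorine(1.2, 0.05, t), 3))
```

A network model would apply this decay along each pipe's travel time to a node; the ANN instead learns the node-level mapping directly from pH, inlet concentration and temperature.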


2019 ◽  
Vol 79 (7) ◽  
pp. 1367-1375
Author(s):  
Amir Ikhlaq ◽  
Hafiza Zara Anwar ◽  
Farhan Javed ◽  
Saba Gull

Today, dyes are one of the major problematic pollutants in the environment and are broadly used in several industrial sectors. In the current research work, decolorization of safranin (basic dye) from aqueous solution was investigated using iron-impregnated peanut shell ash (Fe-PSA) as a catalyst in the UV-assisted heterogeneous Fenton process (Fe-PSA/H2O2/UV). The effect of parameters such as H2O2 concentration, catalyst dose, pH, initial dye concentration, temperature, and agitation speed was studied. The maximum decolorization of safranin was achieved at optimum parametric values of reagent dose = 8 mM, catalyst dose = 0.5 g, pH = 3, initial concentration of safranin = 50 ppm, temperature = 25 °C, and agitation speed = 200 rpm. The results revealed the efficient performance of Fe-PSA as catalyst in the Fe-PSA/H2O2/UV process for safranin treatment.
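Decolorization efficiency and an apparent pseudo-first-order rate constant are the usual way such results are quantified. A sketch in which only the 50 ppm initial concentration matches the abstract; the residual concentration and time are hypothetical:

```python
import math

def decolorization(c0, ct):
    """Decolorization efficiency (%) from initial and residual dye conc."""
    return (c0 - ct) / c0 * 100.0

def pseudo_first_order_k(c0, ct, t):
    """Apparent rate constant k assuming ln(C0/Ct) = k * t."""
    return math.log(c0 / ct) / t

# Hypothetical: 50 ppm safranin reduced to 5 ppm in 60 min
print(round(decolorization(50.0, 5.0), 1))            # efficiency, %
print(round(pseudo_first_order_k(50.0, 5.0, 60.0), 4))  # k, 1/min
```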


Processes ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 174
Author(s):  
Pavlos Kotidis ◽  
Cleo Kontoravdi

Global Sensitivity Analysis (GSA) is a technique that numerically evaluates the significance of model parameters with the aim of reducing the number of parameters that need to be estimated accurately from experimental data. In the work presented herein, we explore different methods and criteria in the sensitivity analysis of a recently developed mathematical model to describe Chinese hamster ovary (CHO) cell metabolism in order to establish a strategic, transferable framework for parameterizing mechanistic cell culture models. For that reason, several types of GSA employing different sampling methods (Sobol’, Pseudo-random and Scrambled-Sobol’), parameter deviations (10%, 30% and 50%) and sensitivity index significance thresholds (0.05, 0.1 and 0.2) were examined. The results were evaluated according to the goodness of fit between the simulation results and experimental data from fed-batch CHO cell cultures. Then, the predictive capability of the model was tested against four different feeding experiments. Parameter value deviation levels proved not to have a significant effect on the results of the sensitivity analysis, while the Sobol’ and Scrambled-Sobol’ sampling methods and a 0.1 significance threshold were found to be the optimum settings. The resulting framework was finally used to calibrate the model for another CHO cell line, resulting in a good overall fit. The results of this work set the basis for the use of a single mechanistic metabolic model that can be easily adapted through the proposed sensitivity analysis method to the behavior of different cell lines and therefore minimize the experimental cost of model development.
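A first-order Sobol index is the fraction of output variance explained by one input alone. A crude double-loop Monte Carlo sketch on a toy two-input model; the paper uses proper Sobol'/scrambled-Sobol' sampling on a full CHO metabolism model, which is far more sample-efficient than this:

```python
import random

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def first_order_index(model, which, n_outer=500, n_inner=200, seed=1):
    """Double-loop Monte Carlo estimate of the first-order Sobol index
    S_i = Var(E[Y | X_i]) / Var(Y), for a 2-input model with both
    inputs uniform on [0, 1]."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                      # fix input `which`
        ys = []
        for _ in range(n_inner):
            xo = rng.random()                  # resample the other input
            x = (xi, xo) if which == 0 else (xo, xi)
            ys.append(model(x))
        all_y.extend(ys)
        cond_means.append(sum(ys) / n_inner)
    return variance(cond_means) / variance(all_y)

# Toy model in which x1 dominates the output variance
model = lambda x: 4.0 * x[0] + 1.0 * x[1]
s1 = first_order_index(model, 0)
s2 = first_order_index(model, 1)
print(round(s1, 3), round(s2, 3))
```

With a significance threshold of 0.1, as found optimal in the study, only the dominant input here would be flagged for accurate estimation from experimental data.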


Mathematics ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 450 ◽  
Author(s):  
Kwang Yoon Song ◽  
In Hong Chang ◽  
Hoang Pham

Evaluating software quality and improving its reliability is a continuing effort, and research on software reliability models is part of it. Software is now used in diverse fields and environments; hence, quantitative confidence standards must be provided when using software. We therefore consider testing coverage and the uncertainty, or randomness, of the operating environment. In this paper, we propose a new testing coverage model based on non-homogeneous Poisson process (NHPP) software reliability with the uncertainty of operating environments, and we provide a sensitivity analysis to study the impact of each parameter of the proposed model. We examine the goodness-of-fit of the new testing coverage model and of other existing models on two datasets. The comparative results for goodness-of-fit show that the proposed model performs significantly better than the existing models. In addition, the results of the sensitivity analysis show how the parameters of the proposed model affect the mean value function.
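NHPP reliability models are characterized by their mean value function. A sketch using the classic Goel-Okumoto form m(t) = a(1 - e^(-bt)) with finite-difference parameter sensitivities; the proposed model's testing-coverage and environment-uncertainty terms are not reproduced here, and the parameter values are invented:

```python
import math

def mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b*t)):
    expected cumulative faults detected by time t, with a total faults
    and b the per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

# Sensitivity of m(t) to each parameter via central finite differences
a, b, t = 100.0, 0.05, 30.0
eps = 1e-6
dm_da = (mean_value(t, a + eps, b) - mean_value(t, a - eps, b)) / (2 * eps)
dm_db = (mean_value(t, a, b + eps) - mean_value(t, a, b - eps)) / (2 * eps)
print(round(mean_value(t, a, b), 2), round(dm_da, 4), round(dm_db, 2))
```

The sensitivities match the analytic derivatives (1 - e^(-bt) and a*t*e^(-bt)), which is the kind of per-parameter impact on m(t) the paper's sensitivity analysis reports.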


Author(s):  
Christopher M. Day ◽  
Howell Li ◽  
Lucy M. Richardson ◽  
James Howard ◽  
Tom Platte ◽  
...  

Signal offset optimization recently has been shown to be feasible with vehicle trajectory data at low levels of market penetration. Offset optimization was performed on two corridors with that type of data. A proposed procedure called “virtual detection” was used to process 6 weeks of trajectory splines and create vehicle arrival profiles for two corridors, comprising 25 signalized intersections. After data were processed and filtered, penetration rates between 0.09% and 0.80% were observed, with variations by approach. Then those arrival profiles were compared statistically with those measured with physical detectors, and most approaches showed statistically significant goodness of fit at a 90% confidence level. Finally, the arrival profiles created with virtual detection were used to optimize offsets and compared with a solution derived from arrival profiles obtained with physical detectors. Results demonstrate that virtual detection can produce good-quality offsets with current market penetration rates of probe data. In addition, a sensitivity analysis of the sampling period indicated that 2 weeks may be sufficient for data collection at current penetration rates.
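Virtual detection reduces, at its core, to binning probe-vehicle arrival times by their position within the signal cycle. A sketch with invented arrival times and cycle length, not the corridor data from the study:

```python
def arrival_profile(arrival_times, cycle_length, n_bins=16):
    """Bin vehicle arrival times (seconds) by their position within the
    signal cycle to form an arrival profile (counts per cycle-phase bin)."""
    bins = [0] * n_bins
    for t in arrival_times:
        phase = (t % cycle_length) / cycle_length      # 0..1 within cycle
        bins[min(int(phase * n_bins), n_bins - 1)] += 1
    return bins

# Hypothetical arrivals (s) at a 90 s cycle: a platoon near t % 90 = 30 s
arrivals = [28, 30, 31, 33, 120, 122, 210, 212, 213, 300, 5, 80]
profile = arrival_profile(arrivals, 90.0)
print(profile)
```

Shifting the offset so the platoon peak aligns with the green window is the optimization step; the study's finding is that such profiles built from 0.09-0.80% probe penetration already agree with those from physical detectors.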

