Super liquid repellent surfaces for anti-foaming and froth management

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
William S. Y. Wong ◽  
Abhinav Naga ◽  
Lukas Hauer ◽  
Philipp Baumli ◽  
Hoimar Bauer ◽  
...  

Abstract
Wet and dry foams are prevalent in many industries, ranging from the food processing and commercial cosmetic sectors to industries such as chemical and oil refining. Uncontrolled foaming results in product losses, equipment downtime or damage, and cleanup costs. To speed up defoaming or enable anti-foaming, liquid oil or hydrophobic particles are usually added. However, such additives may later need to be separated and removed for environmental reasons and product quality. Here, we show that passive defoaming or active anti-foaming is possible simply through the interaction of foam with chemically or morphologically modified surfaces, of which the superamphiphobic variant exhibits superior performance. These surfaces significantly improve the retraction of highly stable wet foams and the prevention of growing dry foams, as quantified for beer and aqueous soap solution as model systems. Microscopic imaging reveals that amphiphobic nano-protrusions directly destabilize contacting foam bubbles, which can favorably vent through air gaps warranted by a Cassie wetting state. This mode of interfacial destabilization offers untapped potential for developing efficient, low-power and sustainable foam and froth management.

1985 ◽  
Vol 53 (2) ◽  
pp. 281-292 ◽  
Author(s):  
Henrik K. Nielsen ◽  
D. De Weck ◽  
P. A. Finot ◽  
R. Liardon ◽  
R. F. Hurrell

1. The stability of tryptophan was evaluated in several different food model systems using a chemical method (high-pressure liquid chromatography after alkaline hydrolysis) and rat assays. Losses of tryptophan were compared with the losses of lysine and methionine.
2. Whey proteins stored in the presence of oxidizing lipids showed large losses of lysine and extensive methionine oxidation but only minor losses of tryptophan as measured chemically. The observed decrease in bioavailable tryptophan was explained by a lower protein digestibility.
3. Casein treated with hydrogen peroxide to oxidize all methionine to methionine sulphoxide showed a 9% loss in bioavailable tryptophan.
4. When casein was reacted with caffeic acid at pH 7 in the presence of monophenol monooxygenase (tyrosinase; EC 1.14.18.1), no chemical loss of tryptophan occurred, although fluorodinitrobenzene-reactive lysine fell by 23%. Tryptophan bioavailability fell 15%, partly due to an 8% reduction in protein digestibility.
5. Alkali-treated casein (0.15 M-sodium hydroxide, 80°C, 4 h) did not support rat growth. Chemically-determined tryptophan, available tryptophan and true nitrogen digestibility fell 10, 46 and 23% respectively. Racemization of tryptophan was found to be 10% (D/(D+L)).
6. In whole-milk powder which had undergone 'early' or 'advanced' Maillard reactions, tryptophan, determined chemically or in rat assays, was virtually unchanged. Extensive lysine losses occurred.
7. It was concluded that losses of tryptophan during food processing and storage are small and of only minor nutritional importance, especially when compared with the much larger losses of lysine and the more extensive oxidation of methionine.


1985 ◽  
Vol 53 (2) ◽  
pp. 293-300 ◽  
Author(s):  
Henrik K. Nielsen ◽  
A. Klein ◽  
R. F. Hurrell

1. Tryptophan losses in stored milk powders and in different model systems representing the major reactions of food proteins during processing and storage were determined using four different chemical methods and in a rat assay.
2. Similar tryptophan values were obtained by the three chemical methods which included high-pressure liquid chromatography (HPLC) after sodium hydroxide hydrolysis, colorimetric reaction with p-dimethylaminobenzaldehyde (p-DAB) after barium hydroxide hydrolysis, and fluorescence of the Norharman derivative after NaOH hydrolysis.
3. Tryptophan losses in the treated proteins as measured by the alkaline-hydrolysis methods were generally smaller than those determined by the rat assay. Good agreement, however, was obtained when the chemical value was multiplied by the true nitrogen digestibility.
4. Determination of tryptophan by reaction with p-DAB after papain (EC 3.4.22.2) digestion gave lower values in the processed proteins than the other chemical methods or the rat assay.
5. A method using alkaline hydrolysis is recommended, preferably combined with HPLC measurement of the liberated tryptophan.


Author(s):  
Gabriela Chmelíková

The topic of this paper is motivated by the increasing popularity of Economic Value Added (EVA) and by the need to make the management of Czech agribusiness firms more efficient. Proponents of EVA argue that adopting the EVA principle leads to more efficient management and allocation of all assets, and hence to increased shareholder value. Although EVA is seen as a superior performance metric from a theoretical point of view, the results of most empirical studies do not support this claim. One standard argument against the superiority of EVA comes from statistical surveys of the relationship between EVA and traditional performance measures. Despite these results, this paper assumes (with regard to the specifics of the Czech food processing sector) that the information content of EVA differs from that of traditional performance metrics. The intent of this article is to provide a simple regression test of the hypothesis that there is no tight linear dependency between EVA and traditional performance metrics; a tight dependency would indicate that EVA has the same information content as the traditional measures. The regression results indicate in all cases a positive correspondence between EVA and the financial performance metrics, but with very low dependency of EVA on them, which supports the examined hypothesis.
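The regression test described above can be sketched as follows. The data here are hypothetical firm-level observations, not the paper's sample: EVA is regressed on a traditional metric (ROA is assumed as the example), and a low R² is read as evidence of differing information content.

```python
# Illustrative sketch (hypothetical data, not the paper's): regress EVA on a
# traditional performance metric such as ROA and inspect R^2. A low R^2
# suggests the two measures carry different information content.

def ols(x, y):
    """Simple ordinary least squares: returns slope, intercept, R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2

# Hypothetical observations: ROA (%) and EVA (million CZK) per firm
roa = [2.1, 3.4, 1.8, 5.0, 4.2, 2.9]
eva = [-1.2, 0.8, -2.0, 1.1, -0.5, 0.3]

slope, intercept, r2 = ols(roa, eva)
print(f"slope={slope:.3f}, intercept={intercept:.3f}, R^2={r2:.3f}")
```

A positive slope with a small R², as in the paper's findings, would be consistent with the examined hypothesis.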


2018 ◽  
Vol 226 ◽  
pp. 04042
Author(s):  
Marko Petkovic ◽  
Marija Blagojevic ◽  
Vladimir Mladenovic

In this paper, we introduce a new approach to food processing using artificial intelligence. The main focus is the simulation of the production of spreads and chocolate as representative confectionery products. This approach helps to speed up, model, optimize, and predict the parameters of food processing, with the aim of increasing the quality of the final products. Artificial intelligence is applied in the form of neural networks and decision methods.
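The general idea of predicting process parameters from data can be sketched as below. This is a minimal stand-in, not the authors' model: a single linear neuron trained by gradient descent on hypothetical normalized process data (the feature names "mixing time" and "temperature" are assumptions for illustration).

```python
# Minimal sketch (hypothetical data, not the paper's model): a single
# linear neuron trained by stochastic gradient descent to predict product
# quality from two process parameters (e.g., mixing time, temperature).

def train(samples, targets, lr=0.05, epochs=3000):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = w[0] * x[0] + w[1] * x[1] + b  # forward pass
            err = y - t                        # prediction error
            w[0] -= lr * err * x[0]            # gradient steps
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Hypothetical normalized data: (mixing time, temperature) -> quality score
X = [(0.2, 0.1), (0.5, 0.4), (0.8, 0.9), (0.3, 0.7)]
y = [0.25, 0.55, 0.95, 0.60]

w, b = train(X, y)
pred = w[0] * 0.5 + w[1] * 0.4 + b  # predict quality for a new setting
```

A real system of the kind the abstract describes would use a multi-layer network and richer process features; the sketch only shows the train-then-predict loop.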


Author(s):  
Mohammed Sarhan Al Duais ◽  
Fatma Susilawati Mohamad

The main problem of the batch back propagation (BBP) algorithm is slow training, and several parameters, such as the learning rate, need to be adjusted manually. In addition, the BBP algorithm suffers from training saturation. The objective of this study is to speed up the training of the BBP algorithm and to remove training saturation. The training rate is the most significant parameter for increasing the efficiency of the BBP. In this study, a new dynamic training rate is created to speed up the training of the BBP algorithm. The dynamic batch back propagation (DBBPLR) algorithm, which trains with a dynamic training rate, is presented. This technique was implemented with a sigmoid function. Several data sets were used as benchmarks for testing the effects of the created dynamic training rate. All experiments were performed in Matlab. The experimental results show that the DBBPLR algorithm provides superior performance over the BBP algorithm and existing works: faster training with higher accuracy.
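The abstract does not give the exact update rule for the dynamic training rate, so the sketch below substitutes the well-known "bold driver" heuristic to show the general idea: batch gradient descent through a sigmoid unit whose learning rate grows while the batch error falls and shrinks sharply when it rises.

```python
import math

# Hedged sketch of a *dynamic* training rate for batch back propagation.
# The paper's exact rule is not reproduced; this uses the classic "bold
# driver" heuristic: grow the rate on progress, back off on divergence.

def sigmoid(z):
    # clamp to avoid overflow if the dynamic rate drives |z| very high
    if z > 60:
        return 1.0
    if z < -60:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def train_batch(X, t, lr=0.5, grow=1.05, shrink=0.5, epochs=200):
    w, b = 0.0, 0.0
    prev_err = float("inf")
    err = prev_err
    for _ in range(epochs):
        # forward pass over the whole batch
        ys = [sigmoid(w * x + b) for x in X]
        err = sum((y - ti) ** 2 for y, ti in zip(ys, t)) / len(X)
        # accumulate batch gradients (sigmoid derivative is y * (1 - y))
        gw = sum((y - ti) * y * (1 - y) * x for x, y, ti in zip(X, ys, t))
        gb = sum((y - ti) * y * (1 - y) for y, ti in zip(ys, t))
        w -= lr * gw
        b -= lr * gb
        # dynamic training rate: speed up on progress, shrink on divergence
        lr = lr * grow if err < prev_err else lr * shrink
        prev_err = err
    return w, b, err

# Toy separable data: inputs below 0 map to class 0, above 0 to class 1
X = [-2.0, -1.0, 1.0, 2.0]
t = [0.0, 0.0, 1.0, 1.0]
w, b, final_err = train_batch(X, t)
```

With a fixed small rate the same network trains noticeably more slowly, which is the effect the DBBPLR algorithm exploits.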


Author(s):  
Mark Damon Gorn ◽  
Matt Orchid Jack ◽  
Conan Ballmon Enderson

Scheduling algorithms, mostly list-based static algorithms, are considered for heterogeneous distributed computing systems (HeDCSs). The algorithms SNLDD, HEFT, and CPOP are studied, together with an implementation of SNLDD that adds a Superior Performance Optimization Procedure (SPOP). In this paper, the performance of the developed SNLDD algorithm is compared with existing algorithms for HeDCSs. The proposed SNLDD algorithm with the SPOP modification is evaluated against HEFT and CPOP in terms of schedule length, speedup, efficiency of running programs, and memory-related quality parameters in parallel computer systems; SNLDD achieves a higher speedup and a faster execution time.
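The comparison metrics named above are standard for list scheduling and can be computed as follows. The numbers are hypothetical, not the paper's results: speedup is the total sequential time divided by the schedule length (makespan), and efficiency is speedup per processor.

```python
# Hedged sketch of the standard metrics used to compare list-scheduling
# algorithms such as HEFT, CPOP and SNLDD (hypothetical numbers, not the
# paper's results): schedule length (makespan), speedup and efficiency.

def metrics(task_times, makespan, n_processors):
    sequential = sum(task_times)           # time to run all tasks on one processor
    speedup = sequential / makespan        # how much faster the schedule is
    efficiency = speedup / n_processors    # utilization of the machines
    return speedup, efficiency

# Hypothetical DAG of 6 tasks scheduled on 3 heterogeneous processors
task_times = [4, 7, 3, 5, 6, 2]  # per-task computation costs
makespan = 12                    # length of the produced schedule
speedup, eff = metrics(task_times, makespan, 3)
```

A better list-scheduling heuristic shortens the makespan, which raises both speedup and efficiency for the same task graph.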


2017 ◽  
Vol 97 (11) ◽  
pp. 3522-3529 ◽  
Author(s):  
Blanca A Mondaca-Navarro ◽  
Luz A Ávila-Villa ◽  
Aarón F González-Córdova ◽  
Jaime López-Cervantes ◽  
Dalia I Sánchez-Machado ◽  
...  

1991 ◽  
Vol 234 ◽  
Author(s):  
Donald Tuomi

ABSTRACT
The development of growing markets for thermoelectric devices strongly depends upon improving the performance of the Peltier-effect alloys. Breaking out of the specialty niches requires doubling the present figures of merit, Z, of commercial alloys, though each incremental gain potentially opens additional specialty niches.

The alloys are polycomponent, heavily doped, N- and P-type semiconductors. Optimization to the highest Z requires controlling bulk phase interactions of the phase-diagram, compositional, crystal-growth, and processing variables that influence the imperfection structures impacting alloy quality. From the first, the complexity of the system needs recognition so that the performance-optimizing variables become clearly identified. This is crucial to commercial production.

The (BiSb)2(TeSe)3 alloys provide both N- and P-type model systems useful in understanding the general challenge of performance optimization. Illustrations are given of the imperfection structural-chemical limitations on the performance attainable by varied technologies.

In seeking other superior-performance alloys, the experimental designs for exploratory research need immediately to address the identification of the dominating imperfection variables in order to recognize quickly the potential present.
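The figure of merit Z referred to above is conventionally defined as Z = S²σ/κ (Seebeck coefficient squared times electrical conductivity over thermal conductivity), often quoted as the dimensionless ZT. The values below are generic, Bi2Te3-class numbers chosen for illustration, not data from this paper.

```python
# Hedged illustration of the thermoelectric figure of merit: Z = S^2 * sigma / kappa.
# The material parameters are representative of Bi2Te3-class alloys,
# not results reported in the paper.

def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK):
    """Z in 1/K from the Seebeck coefficient, electrical and thermal conductivity."""
    return seebeck_V_per_K ** 2 * sigma_S_per_m / kappa_W_per_mK

S = 200e-6   # Seebeck coefficient, 200 uV/K
sigma = 1e5  # electrical conductivity, S/m
kappa = 1.5  # total thermal conductivity, W/(m K)

Z = figure_of_merit(S, sigma, kappa)
ZT = Z * 300.0  # dimensionless figure of merit at room temperature
```

Doubling Z, as the abstract calls for, means some combination of a higher Seebeck coefficient, higher electrical conductivity, and lower thermal conductivity, which is why the imperfection structure matters so much.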


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Sheng Bi ◽  
Qiang Wang

A no-search fractal image coding method based on a fitting surface is proposed. In our research, an improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks, so one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the previous fitting planes; this results in smaller block-matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also imply fewer block matches, which yields a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is proposed to speed up the fractal decoding. Experiments show that the proposed method yields superior performance over the other three methods. Relative to the range-averaged image, the FSI provides a faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed which provides higher PSNR than JPEG while maintaining the same bpp.


Author(s):  
Chi-Ming Marvin Chung ◽  
Vincent Hwang ◽  
Matthias J. Kannwischer ◽  
Gregor Seiler ◽  
Cheng-Jhih Shih ◽  
...  

In this paper, we show how multiplication for the polynomial rings used in the NIST PQC finalists Saber and NTRU can be efficiently implemented using the number-theoretic transform (NTT). We obtain superior performance compared to the previous state-of-the-art implementations using Toom–Cook multiplication on both of NIST's primary software optimization targets, AVX2 and Cortex-M4. Interestingly, these two platforms require different approaches: on the Cortex-M4, we use 32-bit NTT-based polynomial multiplication, while on Intel we use two 16-bit NTT-based polynomial multiplications and combine the products using the Chinese Remainder Theorem (CRT).

For Saber, the performance gain is particularly pronounced. On Cortex-M4, the Saber NTT-based matrix-vector multiplication is 61% faster than the Toom–Cook multiplication, resulting in 22% fewer cycles for Saber encapsulation. For NTRU, the speed-up is less impressive, but NTT-based multiplication still performs better than Toom–Cook for all parameter sets on Cortex-M4. The NTT-based polynomial multiplication for NTRU-HRSS is 10% faster than Toom–Cook, which results in a 6% cost reduction for encapsulation. On AVX2, we obtain speed-ups for three out of four NTRU parameter sets.

As a further illustration, we also include code for AVX2 and Cortex-M4 for the Chinese Association for Cryptologic Research competition award winner LAC (also a NIST round 2 candidate), which outperforms existing code.
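The NTT-based multiplication pattern described above, transform both polynomials, multiply pointwise, transform back, can be illustrated with a toy example. The parameters (q = 257, n = 8) are chosen only so that an n-th root of unity exists mod q; they are not Saber or NTRU parameters, and the O(n²) transform here stands in for the optimized butterfly NTTs in the paper.

```python
# Toy illustration of NTT-based polynomial multiplication, far from the
# optimized AVX2/Cortex-M4 code in the paper: multiply two polynomials
# modulo (x^N - 1, Q) by pointwise multiplication in the NTT domain.
# Q = 257 and N = 8 are illustrative, not Saber/NTRU parameters.

Q = 257                      # prime with N | Q - 1
N = 8
W = pow(3, (Q - 1) // N, Q)  # primitive N-th root of unity (3 generates Z_257*)

def ntt(a, root):
    """Evaluate polynomial a at the powers of `root` (an O(N^2) transform)."""
    return [sum(a[j] * pow(root, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def poly_mul(a, b):
    """Cyclic convolution of a and b modulo (x^N - 1, Q) via the NTT."""
    fa, fb = ntt(a, W), ntt(b, W)
    fc = [x * y % Q for x, y in zip(fa, fb)]      # pointwise product
    c = ntt(fc, pow(W, Q - 2, Q))                 # inverse transform uses W^-1
    inv_n = pow(N, Q - 2, Q)                      # N^-1 mod Q by Fermat
    return [x * inv_n % Q for x in c]

a = [1, 2, 0, 0, 0, 0, 0, 0]  # 1 + 2x
b = [3, 4, 0, 0, 0, 0, 0, 0]  # 3 + 4x
# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
product = poly_mul(a, b)
```

The point of the paper is that even for rings like Saber's, whose modulus is not NTT-friendly, one can lift to a larger NTT-friendly modulus (or, on AVX2, split across two 16-bit NTTs and recombine via the CRT) and still beat Toom–Cook.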

