Recent Patents on Nanoporous Filtration Membranes with Complex Pores

2020 ◽  
Vol 14 (2) ◽  
pp. 229-233
Author(s):  
Yongbin Zhang

Background: The main challenges for nanoporous filtration membranes are small fluxes and low mechanical strength. Objective: To introduce newly patented nanoporous filtration membranes with complex pores, which offer improved fluxes and mechanical strengths. Methods: Analytical results are presented for the addressed membranes. Results: The geometrical parameter values of the addressed membranes can be optimized for the highest fluxes. Conclusion: The overall performance of nanoporous filtration membranes with complex cylindrical and/or conical pores can be significantly better than that of conventional nanoporous filtration membranes with single cylindrical or conical pores.

Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 278
Author(s):  
Sanlong Jiang ◽  
Shaobo Li ◽  
Qiang Bai ◽  
Jing Yang ◽  
Yanming Miao ◽  
...  

A reasonable grasping strategy is a prerequisite for successfully grasping a target, and it is also a basic condition for the wide application of robots. At present, mainstream grippers on the market are divided into two-finger and three-finger grippers. According to human grasping experience, the stability of three-finger grippers is much better than that of two-finger grippers. This paper therefore focuses on a three-finger grasping strategy generation method based on the DeepLab V3+ algorithm. DeepLab V3+ uses atrous (dilated) convolution kernels and the atrous spatial pyramid pooling (ASPP) architecture built on them. An atrous convolution kernel can adjust the field-of-view of a filter layer by changing the dilation rate. In addition, ASPP captures multi-scale information by connecting atrous convolutional layers with several dilation rates in parallel, so that the model performs better on multi-scale objects. The article innovatively uses the DeepLab V3+ algorithm to generate the grasping strategy for a target and optimizes the atrous convolution parameter values of ASPP. The Cornell Grasp dataset was used to train and verify the model. In addition, a smaller but more complex dataset of 60 samples was produced to reflect the actual application scenario. Upon testing, good experimental results were obtained.
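For readers less familiar with ASPP, the sketch below (our illustration in PyTorch, with placeholder dilation rates rather than the values optimized in the paper) shows the parallel atrous branches that give the module its multi-scale field-of-view:

```python
# Minimal ASPP-style module (illustrative sketch, not the paper's exact configuration).
# Assumes PyTorch; the dilation rates below are placeholders, not optimized values.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(6, 12, 18)):
        super().__init__()
        # One 1x1 branch plus one 3x3 atrous branch per dilation rate.
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)] +
            [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False)
             for r in rates]
        )
        # Fuse the concatenated branch outputs back to out_ch channels.
        self.project = nn.Conv2d(out_ch * (1 + len(rates)), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees the same input but with a different effective field-of-view.
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))

# Example: a 256-channel backbone feature map.
y = ASPP(256, 256)(torch.randn(1, 256, 33, 33))
```

Each 3x3 branch covers a different effective receptive field at the same spatial resolution, which is why changing the dilation rates trades context against detail without changing the number of weights.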


Solid Earth ◽  
2016 ◽  
Vol 7 (4) ◽  
pp. 1157-1169 ◽  
Author(s):  
Paul W. J. Glover

Abstract. When scientists apply Archie's first law they often include an extra parameter a, introduced by Winsauer et al. (1952) about 10 years after the equation's first publication, and sometimes called the “tortuosity” or “lithology” parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of the cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
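For reference, the two forms under discussion, in standard textbook notation (our restatement, not equations quoted from the paper), with F the formation resistivity factor and φ the porosity, are:

```latex
% Standard forms (our restatement): F = formation resistivity factor,
% phi = porosity, m = cementation exponent, a = the extra parameter.
\[
  F = \frac{R_o}{R_w} = \phi^{-m}
  \qquad \text{(Archie, 1942)}
\]
\[
  F = a\,\phi^{-m}
  \qquad \text{(Winsauer et al., 1952)}
\]
```

Fitting log F against log φ, the Winsauer form returns slope −m and intercept log a; forcing a = 1 lets systematic errors in F or φ leak into the fitted cementation exponent, which is consistent with the compensation mechanism described in the abstract.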


This chapter presents a general format for higher order neural networks (HONNs) for nonlinear data analysis, together with six different HONN models. It then mathematically proves that HONN models can converge with mean squared errors close to zero, and illustrates the learning algorithm with its update formulas. HONN models are compared with SAS nonlinear (NLIN) models, and the results show that HONN models are 3 to 12% better than the SAS nonlinear models. Finally, the chapter shows how to use HONN models to find the best model, order, and coefficients without writing the regression expression, declaring parameter names, or supplying initial parameter values.
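As a rough illustration of the higher-order idea (a generic Python sketch, not one of the chapter's six HONN models or its exact update formulas), the inputs can be expanded into higher-order product terms whose weights are trained by simple mean-squared-error updates:

```python
# Generic higher-order sketch (not one of the chapter's six HONN models):
# expand two inputs into product terms x^i * y^j and train the weights with
# plain gradient-descent updates on the mean squared error.
import numpy as np

def honn_features(x, y, order=3):
    """Higher-order terms x^i * y^j with 0 <= i + j <= order."""
    return np.array([x**i * y**j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)]).T

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 1.0 + 2.0 * x * y - 0.5 * x**2 + 0.01 * rng.standard_normal(200)  # nonlinear target

Phi = honn_features(x, y, order=3)
w = np.zeros(Phi.shape[1])
for _ in range(20000):                         # update formula: w <- w - lr * grad(MSE)
    grad = Phi.T @ (Phi @ w - z) / len(z)
    w -= 0.05 * grad

print("final MSE:", np.mean((Phi @ w - z) ** 2))   # close to the noise floor
```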


2017 ◽  
Vol 24 (1) ◽  
pp. 110-141 ◽  
Author(s):  
Andrew W. Stevenson ◽  
Jeffrey C. Crosbie ◽  
Christopher J. Hall ◽  
Daniel Häusermann ◽  
Jayde Livingstone ◽  
...  

A critical early phase for any synchrotron beamline involves detailed testing, characterization and commissioning; this is especially true of a beamline as ambitious and complex as the Imaging & Medical Beamline (IMBL) at the Australian Synchrotron. IMBL staff and expert users have been performing precise experiments aimed at quantitative characterization of the primary polychromatic and monochromatic X-ray beams, with particular emphasis placed on the wiggler insertion devices (IDs), the primary-slit system and any in vacuo and ex vacuo filters. The findings from these studies will be described herein. These results will benefit IMBL and other users in the future, especially those for whom detailed knowledge of the X-ray beam spectrum (or ‘quality’) and flux density is important. This information is critical for radiotherapy and radiobiology users, who ultimately need to know (to better than 5%) what X-ray dose or dose rate is being delivered to their samples. Various correction factors associated with ionization-chamber (IC) dosimetry have been accounted for, e.g. ion recombination and electron-loss effects. A new and innovative approach has been developed in this regard, which can provide confirmation of key parameter values such as the magnetic field in the wiggler and the effective thickness of key filters. IMBL commenced operation in December 2008 with an Advanced Photon Source (APS) wiggler as the (interim) ID. A superconducting multi-pole wiggler was installed and operational in January 2013. Results are obtained for both of these IDs and useful comparisons are made. A comprehensive model of the IMBL has been developed, embodied in a new computer program named spec.exe, which has been validated against a variety of experimental measurements. Having demonstrated the reliability and robustness of the model, it is then possible to use it in a practical and predictive manner. It is hoped that spec.exe will prove to be a useful resource for synchrotron science in general, and for hard X-ray beamlines, whether they are based on bending magnets or insertion devices, in particular. In due course, it is planned to make spec.exe freely available to other synchrotron scientists.


2019 ◽  
Vol 147 (5) ◽  
pp. 1699-1712 ◽  
Author(s):  
Bo Christiansen

Abstract. In weather and climate sciences, ensemble forecasts have become an acknowledged community standard. It is often found that the ensemble mean not only has a lower error than the typical ensemble member but also outperforms all of the individual ensemble members. We analyze ensemble simulations based on a simple statistical model that allows for bias and that has different variances for observations and the model ensemble. Using generic simplifying geometric properties of high-dimensional spaces, we obtain analytical results for the error of the ensemble mean. These results include a closed form for the rank of the ensemble mean among the ensemble members and depend on two quantities: the ensemble variance and the bias, both normalized by the variance of the observations. The analytical results are used to analyze the GEFS reforecast, where the variances and bias depend on lead time. For intermediate lead times, between 20 and 100 h, the two terms are both around 0.5, and the ensemble mean is only slightly better than the individual ensemble members. For lead times larger than 240 h the variance term is close to 1 and the bias term is near 0.5. For these lead times the ensemble mean outperforms almost all individual ensemble members and its relative error comes close to −30%. These results are in excellent agreement with the theory. The simplifying properties of high-dimensional spaces can be applied not only to the ensemble mean but also to, for example, the ensemble spread.
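The geometric argument can be illustrated with a small Monte Carlo experiment (our generic sketch, not the paper's statistical model or its GEFS analysis): in a high-dimensional space, a biased ensemble with normalized variance near 1 and normalized squared bias near 0.5 yields an ensemble mean that beats nearly every member.

```python
# Monte Carlo sketch of the high-dimensional ensemble-mean argument
# (a generic illustration, not the paper's model or GEFS reforecast analysis).
import numpy as np

rng = np.random.default_rng(1)
d, N, trials = 1000, 20, 500            # state dimension, ensemble size, repetitions
sigma_obs, sigma, bias = 1.0, 1.0, 0.7  # per-component bias 0.7 -> normalized bias^2 ~ 0.5

ranks, rel_err = [], []
for _ in range(trials):
    truth = np.zeros(d)
    obs = truth + sigma_obs * rng.standard_normal(d)               # observation
    members = truth + bias + sigma * rng.standard_normal((N, d))   # biased ensemble
    mean = members.mean(axis=0)
    err_members = np.linalg.norm(members - obs, axis=1)
    err_mean = np.linalg.norm(mean - obs)
    ranks.append(np.sum(err_members < err_mean))        # members beating the mean
    rel_err.append(err_mean / err_members.mean() - 1.0) # error relative to typical member

print("members beating the ensemble mean (average):", np.mean(ranks))
print("relative error of the ensemble mean        :", np.mean(rel_err))
```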


2011 ◽  
Vol 2 (1) ◽  
pp. 161-177 ◽  
Author(s):  
L. A. van den Berge ◽  
F. M. Selten ◽  
W. Wiegerinck ◽  
G. S. Duane

Abstract. In the current multi-model ensemble approach, climate model simulations are combined a posteriori. In the method of this study, the models in the ensemble exchange information during the simulation and learn from historical observations to combine their strengths into a best representation of the observed climate. The method is developed and tested in the context of small chaotic dynamical systems, such as the Lorenz 63 system. Imperfect models are created by perturbing the standard parameter values. Three imperfect models are combined into one super-model through the introduction of connections between the model equations. The connection coefficients are learned from data from the unperturbed model, which is regarded as the truth. The main result of this study is that, after learning, the super-model is a very good approximation to the truth, much better than each imperfect model separately. These illustrative examples suggest that the super-modeling approach is a promising strategy for improving weather and climate simulations.
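The structure of such a super-model can be sketched as follows (our illustration; the perturbed parameter sets and connection coefficients are placeholders, not the values used or learned in the study):

```python
# Sketch of connected (super-model) Lorenz 63 equations: three imperfect models
# exchange information through connection terms C[i, j] * (x_j - x_i).
import numpy as np

def lorenz(state, sigma, rho, beta):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Standard ("true") parameters and three illustrative perturbed parameter sets.
TRUE_PARAMS = (10.0, 28.0, 8.0 / 3.0)
IMPERFECT = [(12.0, 25.0, 3.2), (8.0, 31.0, 2.2), (11.0, 26.0, 2.9)]
C = 0.5 * (np.ones((3, 3)) - np.eye(3))   # placeholder connection coefficients

def supermodel_rhs(states):
    """states: (3, 3) array, one row of (x, y, z) per imperfect model."""
    out = np.empty_like(states)
    for i, params in enumerate(IMPERFECT):
        coupling = sum(C[i, j] * (states[j] - states[i]) for j in range(3) if j != i)
        out[i] = lorenz(states[i], *params) + coupling
    return out

# Simple Euler integration; the super-model trajectory is the average of the
# connected model states. With learned coefficients the average stays close to
# the truth; with these placeholders the run merely illustrates the structure.
dt = 0.001
states = np.ones((3, 3))        # the three connected imperfect models
truth = np.ones(3)              # reference run with the standard parameters
for _ in range(20000):
    states = states + dt * supermodel_rhs(states)
    truth = truth + dt * lorenz(truth, *TRUE_PARAMS)
print("super-model average:", states.mean(axis=0))
print("truth              :", truth)
```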


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Nadia Tebbal ◽  
Zine El Abidine Rahmouni

This research incorporates crushed sand (CS) into the composition of a concrete and studies the effect of its gradual replacement by dune sand (SD) on the sustainability of high performance concrete (HPC) in aggressive environments. The experimental study shows that the workability parameters of HPC are improved when CS is partially replaced by SD (<2/3). However, at high SD contents (>1/3), additional quantities of water are needed to meet the workability requirements. The mechanical strengths decrease when SD is added to CS, but they reach acceptable values at moderate dosages. The HPC performances are significantly better than those of the control concrete made with the same aggregates. The durability tests show that the capillary water absorption coefficients increase after adding SD to the CS.


2021 ◽  
Vol 18 (1) ◽  
pp. 99-121
Author(s):  
DANIEL R. MELAMED

Abstract. Recent studies propose that J. S. Bach established ‘parallel proportions’ in his music – ratios of the lengths of movements or of pieces in a collection intended to reflect the perfection of divine creation. Before we assign meaning to the number of bars in a work, we need to understand the mathematical and musical basis of the claim.
First we need to decide what a ‘bar’ is and what constitutes a ‘movement’. We have explicit evidence from Bach on these points for his 1733 Dresden Missa, and his own tallies do not agree with those in the theory. There are many ways to count, and the numbers of movements or bars are analytical results dependent on choices by the analyst, not objective data.
Next, chance turns out to play an enormous role in ‘parallel proportions’. Under certain constraints almost any set of random numbers that adds up to an even total can be partitioned to show a proportion, with likelihoods better than ninety-five per cent in sets that resemble the Missa. These relationships are properties of numbers, not musical works. We thus need to ask whether any apparent proportion is the result of Bach's design or is simply a statistically inevitable result, and the answer is clearly the latter. For pieces or sets with fewer movements the odds are less overwhelming, but the subjective nature of counting and the possibility of silently choosing from among many possibilities make even these results questionable.
Theories about the number of bars in Bach's music and possible meanings are interpretative, not factual, and thus resistant to absolute disproof. But a mathematical result of the kind claimed for ‘parallel proportions’ is essentially assured even for random sets of numbers, and that makes it all but impossible to label such relationships as intentional and meaningful.
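The statistical point can be checked with a short simulation (our illustration; the movement count and bar-length range are assumptions, not figures taken from the Missa): for random "movement lengths" with an even total, a subset summing to exactly half the total, i.e. a 1:1 proportion, is almost always available.

```python
# Monte Carlo check of the statistical claim (an illustration, not the author's
# own analysis): random "movement lengths" with an even total can almost always
# be split into two groups with equal sums, i.e. an exact 1:1 proportion.
import random

def can_split_in_half(lengths):
    total = sum(lengths)
    if total % 2:
        return False
    reachable = {0}                       # classic subset-sum dynamic programming
    for n in lengths:
        reachable |= {s + n for s in reachable if s + n <= total // 2}
    return total // 2 in reachable

random.seed(0)
hits = trials = 0
while trials < 2000:
    movements = [random.randint(20, 300) for _ in range(25)]  # assumed sizes
    if sum(movements) % 2:                # keep only even totals, as in the claim
        continue
    trials += 1
    hits += can_split_in_half(movements)
print(f"sets with an exact 1:1 partition: {100 * hits / trials:.1f}%")
```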


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Dariusz Puchala

Abstract. In this paper, based on a parametric model of the discrete cosine transform (DCT) matrix and using an exhaustive search of the parameter space, we seek the best approximations of the 8-point DCT at given computational complexities, taking into account three different scenarios of practical usage. The possible parameter values are selected in such a way that the resulting transforms are multiplierless approximations, i.e., only additions and bit-shift operations are required. The considered usage scenarios include cases where the approximation of the DCT is used: (i) at the data compression stage, (ii) at the decompression stage, and (iii) at both the compression and decompression stages. The effectiveness of the generated approximations is compared with that of popular known approximations of the 8-point DCT of the same class (i.e., multiplierless approximations). In addition, we perform a series of experiments in lossy compression of natural images using the popular JPEG standard; the results are presented and discussed. It should be noted that in the overwhelming majority of cases the generated approximations are better than the known ones, e.g., in the asymmetric scenarios by more than 3 dB starting from an entropy of 2 bits per pixel. In the last part of the paper, we investigate the possibility of hardware implementation of the generated approximations in Field-Programmable Gate Array (FPGA) circuits; the results in the form of resource and energy consumption are presented and discussed. The experimental outcomes confirm the assumption that the considered class of transformations is characterized by low resource utilization.
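As a point of orientation (our generic illustration, not the paper's parametric model or its optimized transforms), the sketch below builds the exact 8-point DCT-II matrix and a simple signed, multiplierless stand-in whose raw entries are only ±1, so applying it needs additions and subtractions only:

```python
# Generic illustration (not the paper's parametric model): compare the exact
# 8-point DCT-II matrix with a simple multiplierless approximation whose raw
# entries are only +1/-1, so the transform needs additions/subtractions only.
import numpy as np

N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)                 # exact orthonormal DCT-II matrix

T_raw = np.sign(np.round(C, 12))        # signed-DCT-style low-cost approximation
# Diagonal scaling restores approximate normalization; in image codecs such
# scaling can be folded into the quantization step, so it costs nothing extra.
S = np.diag(1.0 / np.sqrt(np.sum(T_raw ** 2, axis=1)))
T = S @ T_raw

x = np.random.default_rng(0).integers(0, 256, size=N).astype(float)
print("exact coefficients :", np.round(C @ x, 2))
print("approx coefficients:", np.round(T @ x, 2))
print("matrix MSE vs exact:", np.mean((T - C) ** 2))
```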


Author(s):  
Wahyu Supriyatin ◽  
Winda Widya Ariestya ◽  
Ida Astuti

Object tracking is one of the applications in the field of computer vision. In this study, object tracking is used to identify the objects present in a frame and to count the number of objects passing through the frame. Computer vision can be applied in many fields to solve existing problems. The object tracking methods compared here are optical flow estimation and background estimation. Both methods are tested with a still camera, varying the parameter values used as a reference. Tests on three videos comparing the two methods show that the optical flow estimation method achieves a better Total Recorded Time than the background estimation method, at under 100 seconds. Both methods successfully track the objects and count the number of passing cars.
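A minimal sketch of the two compared approaches (our illustration using OpenCV; the file name, thresholds and blob-size cutoff are assumptions, not the study's actual settings) might look like this:

```python
# Minimal sketch of the two compared approaches with a fixed camera
# (illustrative only; file name and thresholds are assumptions).
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")          # assumed input video
backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Method 1: background estimation -> foreground mask -> blobs.
    fg = backsub.apply(frame)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    n_bg = sum(1 for c in contours if cv2.contourArea(c) > 500)

    # Method 2: dense optical flow -> motion magnitude -> blobs.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    motion = (mag > 1.0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    n_flow = sum(1 for c in contours if cv2.contourArea(c) > 500)

    prev_gray = gray
    print("objects per frame (background / optical flow):", n_bg, n_flow)

cap.release()
```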

