Quantitative comparison of the microstructural quality of two classes of commercial soft triboalloys

2008 · Vol 59 (3) · pp. 312-320
Author(s): Rafael Schouwenaars, Victor H. Jacobo, Armando Ortiz
2002 · Vol 24 (2) · pp. 109-118
Author(s): S. Srinivasan, F. Kallel, R. Souchon, J. Ophir

Elastography is based on the estimation of strain due to tissue compression or expansion. Conventional elastography involves computing strain as the gradient of the displacement (time-delay) estimates between gated pre- and postcompression signals. Uniform temporal stretching of the postcompression signals has been used to reduce the echo-signal decorrelation noise. However, a uniform stretch of the entire postcompression signal is not optimal in the presence of strain contrast in the tissue and could result in loss of contrast in the elastogram. This has prompted the use of local adaptive stretching techniques. Several adaptive strain estimation techniques using wavelets, local stretching and iterative strain estimation have been proposed. Yet, a quantitative analysis of the improvement in quality of the strain estimates over conventional strain estimation techniques has not been reported. We propose a two-stage adaptive strain estimation technique and perform a quantitative comparison with the conventional strain estimation techniques in elastography. In this technique, initial displacement and strain estimates using global stretching are computed, filtered and then used to locally shift and stretch the postcompression signal. This is followed by a correlation of the shifted and stretched postcompression signal with the precompression signal to estimate the local displacements and hence the local strains. As proof of principle, this adaptive stretching technique was tested using simulated and experimental data.
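The two-stage scheme can be illustrated numerically. The sketch below is a minimal 1-D toy (all signal parameters, the window size, and the hop are assumptions, not the authors' settings): a precompression signal is compressed by a known uniform strain, a global stretch undoes that compression, and windowed cross-correlation between the precompression and stretched postcompression signals then yields the residual local displacements, whose axial gradient would give the local strain.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4096
pre = rng.standard_normal(n)      # precompression signal (white, for illustration)
true_strain = 0.01                # 1% uniform applied strain (assumed)

# Postcompression signal: an axially compressed copy of the precompression signal.
x = np.arange(n, dtype=float)
post = np.interp(x * (1 + true_strain), x, pre)

def window_delay(a, b, start, w):
    """Integer delay of b relative to a maximizing the windowed cross-correlation."""
    seg_a = a[start:start + w] - a[start:start + w].mean()
    seg_b = b[start:start + w] - b[start:start + w].mean()
    xc = np.correlate(seg_b, seg_a, mode="full")
    return int(np.argmax(xc)) - (w - 1)

# Stage 1: global stretch of the postcompression signal by the trial strain.
stretched = np.interp(x / (1 + true_strain), x, post)

# Stage 2: windowed cross-correlation between the precompression and the
# stretched postcompression signal; the residual delays are ~0 here because
# the applied strain was uniform, and their gradient would give local strain.
w, hop = 256, 128
starts = np.arange(0, n - w, hop)
delays = np.array([window_delay(pre, stretched, s, w) for s in starts])
```

With a nonuniform strain field, the residual delays would vary with depth, and the second stage would shift and stretch each window by the locally filtered estimates instead of a single global factor.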


2013 · Vol 116 (2) · pp. 333-340
Author(s): Elizabeth Klein, David Altshuler, Abhirami Hallock, Nicholas Szerlip

2020
Author(s): Lincheng Jiang, Gang Tian, Bangbing Wang, Amr Abd El-Raouf

In recent decades, geoelectrical methods have played an important role in near-surface investigation, and the most widely used of these methods is electrical resistivity tomography (ERT). Regardless of the forward and inversion algorithms used, the original data collected in a survey are the most important factor in the quality of the resulting model. Nevertheless, 3D electrical resistivity survey design is still based on data sets recorded with one or more of the standard electrode arrays. There is a recognized need for 3D survey designs that achieve better resolution with fewer data. Selecting suitable data from the comprehensive data set is a promising approach: with a reasonable selection, better resolution can be obtained with fewer electrodes and measurements than with conventional arrays. Previous research has demonstrated that optimized survey design using the 'Compare R' method performs well.

This paper adds target-oriented selection and modifies the original 'Compare R' method. The survey design is focused on specific target areas, which requires a priori information about the subsurface properties. We first select from the comprehensive set the electrodes and configurations that meet the requirements of the target area. This target set contains far fewer measurements and electrodes than the comprehensive set, so its model resolution matrix takes less time to calculate. In the subsequent ranking step, we calculate the sensitivity matrix of the target set only once and then compute the contribution of each measurement from it, so iteratively recalculating the resolution matrix as the measurement set changes takes less time than in the original method.

The traditional RMS criterion is not appropriate for comparing the quality of the data collected with different survey designs. The structural similarity index (SSIM) gives a more reliable measure of image similarity than RMS. Curves of SSIM values in three dimensions and the average SSIM are given as quantitative comparisons. In addition, the frequency with which each electrode is used provides guidance for selecting the most heavily used electrodes. Finally, curves of the average relative resolution S and of the number of electrodes as the number of measurements increases are given, which show that the method works effectively.

The results show the significance of target-oriented optimized survey design: it selects fewer electrodes and arrays than the original Compare R method, produces better resolution than conventional arrays, and takes less calculation time. These quantitative comparison methods (3D SSIM, the frequency of electrodes used, and the relationship between average relative resolution, number of electrodes, and number of measurements) can effectively evaluate the data collected with various survey designs.
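The point about RMS versus SSIM can be made concrete with a small sketch (NumPy only; the model and error patterns are illustrative assumptions). Two perturbed versions of a reference resistivity model are constructed with exactly the same RMS error; RMS cannot distinguish them, while a single-window SSIM assigns them clearly different scores.

```python
import numpy as np

def ssim_global(a, b, data_range=1.0):
    """Single-window SSIM with the standard constants (Wang et al.)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a**2 + mu_b**2 + c1) * (a.var() + b.var() + c2)
    return num / den

rng = np.random.default_rng(1)
model = np.zeros((32, 32))
model[10:22, 10:22] = 1.0                  # reference resistivity anomaly

# Two "inverted" models with exactly the same RMS error (0.2) but different
# error structure: a constant offset versus zero-mean-like random noise.
offset = model + 0.2
noise = rng.standard_normal(model.shape)
noise *= 0.2 / np.sqrt(np.mean(noise**2))  # scale the error RMS to exactly 0.2
noisy = model + noise

rms_offset = np.sqrt(np.mean((offset - model) ** 2))
rms_noisy = np.sqrt(np.mean((noisy - model) ** 2))
ssim_offset = ssim_global(model, offset)
ssim_noisy = ssim_global(model, noisy)
```

The two RMS values are identical by construction, yet the SSIM scores differ, which is precisely why a structural index is the better yardstick for comparing inversion results from different survey designs.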


Author(s): Divyangkumar D. Patel, Devdas I. Lalwani

In 2.5D pocket machining, the pocket geometry (the shape of the pocket) significantly affects the efficiency of a spiral tool path in terms of tool path length, cutting time, surface roughness, cutting forces, etc. Hence, the pocket geometry is an important factor that needs to be considered. However, quantitative methods to compare different pocket geometries are scarcely available. In this paper, we introduce a novel approach for the quantitative comparison of different pocket geometries using a dimensionless number, "DN." The concept and formula of DN are developed, and DN is calculated for various pocket geometries. A concept of percentage utilization of tool (PUT) is also introduced and is used as a measure and an indicator of a good tool path. Guidelines for comparing pocket geometries based on DN and PUT are reported. The results show that DN can be used to predict the quality of a tool path prior to tool path generation. Further, an algorithm to decompose a pocket geometry into subgeometries is developed that improves the efficiency of the spiral tool path for bottleneck (or multiply connected) pockets. This algorithm uses another dimensionless number, "HARIN" (HARI is the acronym of "helps in appropriate rive-lines identification" and the suffix "N" stands for number), to compare the parent pocket geometry with its subgeometries. The results indicate that decomposing the pocket geometry with the new algorithm improves HARIN and removes the effect of bottlenecks. Furthermore, the decomposition algorithm is extended to pockets bounded by free-form curves.
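The paper's DN and HARIN formulas are its own and are not reproduced here. As a generic illustration of how a dimensionless number can rank pocket geometries, the sketch below uses the standard isoperimetric compactness 4πA/P² together with the first-order estimate that a constant-stepover spiral path has length roughly A/s.

```python
import math

def compactness(area, perimeter):
    """Isoperimetric shape factor 4*pi*A/P^2: 1.0 for a circle, lower for
    elongated or ragged outlines (a generic descriptor, not the paper's DN)."""
    return 4 * math.pi * area / perimeter ** 2

def spiral_length_estimate(area, stepover):
    """First-order length of a constant-stepover spiral path: A / s."""
    return area / stepover

# Two pockets of equal area (1600 mm^2): a 40x40 square and a 160x10 slot.
# Equal area implies a similar estimated path length, but the slot's much
# lower compactness flags the elongated geometry that degrades the spiral.
square = compactness(1600, 4 * 40)         # pi/4, about 0.785
slot = compactness(1600, 2 * (160 + 10))   # about 0.174
```

A shape-aware number of this kind can be computed before any tool path is generated, which is the same practical advantage the paper claims for DN.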


2006 · Vol 51 (20) · pp. 5363-5375
Author(s): J Menhel, D Levin, D Alezra, Z Symon, R Pfeffer

1997 · Vol 3 (4) · pp. 299-310
Author(s): Geoffrey H. Campbell, Dov Cohen, Wayne E. King

Abstract: A method is described to prepare a high-resolution electron micrograph for quantitative comparison with a simulated high-resolution image. The experimental data are converted from the darkening of film used to acquire the image to units of electrons per incident electron, the same units used in the simulation. Also, distortions in the image arising from distortions in the image-forming lenses of the electron microscope are removed to improve the quality of the data. Finally, an alignment procedure is described which gives precise, pixel-by-pixel alignment of the experimental image with the simulated image. Examples of the procedure are shown to illustrate how actual data are prepared for quantitative analysis.
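The alignment step can be illustrated with a standard FFT-based cross-correlation (a generic registration technique, not necessarily the authors' exact procedure): the peak of the circular cross-correlation between the simulated and experimental images gives the integer pixel shift that registers one onto the other.

```python
import numpy as np

def align_shift(exp_img, sim_img):
    """Integer (row, col) shift to apply to exp_img (via np.roll) so that it
    registers onto sim_img, taken from the peak of the circular FFT-based
    cross-correlation of the two images."""
    f = np.fft.fft2(sim_img) * np.conj(np.fft.fft2(exp_img))
    xc = np.fft.ifft2(f).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Wrap peaks in the upper half of each axis around to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, xc.shape))

rng = np.random.default_rng(2)
sim = rng.random((64, 64))                        # stand-in "simulated" image
expt = np.roll(sim, shift=(5, -3), axis=(0, 1))   # stand-in "experimental" image

shift = align_shift(expt, sim)                    # recovers (-5, 3)
```

Subpixel precision, as implied by the paper's "precise, pixel-by-pixel" alignment, would additionally interpolate around the correlation peak.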


Author(s): Hiroyuki Konda, Hideki Nakamura

This study estimated composite headway distributions consisting of follower and non-follower headway elements and used the follower percentage, obtained as an estimated parameter of those distributions, to evaluate the quality of service (QOS) of traffic flow on Japanese intercity expressways under uncongested conditions. Analysis of pulse data obtained by vehicle detectors at multiple points with differing geometric structures showed that the follower percentage is influenced by lane traffic volume, vehicle pair, and lane operation. Use of the follower percentage also enabled a clear and quantitative comparison and evaluation of the QOS of traffic flow for different lane operation formats, which could not be adequately expressed by such conventional macroscopic indices as average speed and traffic density. This indicates that the follower percentage is a suitable performance measure for evaluating the QOS of traffic flow.
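A minimal version of the follower/non-follower decomposition can be sketched as a two-component headway mixture. In the sketch below, the distributional choices, parameter values, and the simple thresholding estimator are all illustrative assumptions, not the authors' estimation procedure (which fits composite distributions and reads the follower percentage off their parameters).

```python
import numpy as np

rng = np.random.default_rng(3)

# Composite headway sample: a "follower" component at short, tightly clustered
# headways plus a "non-follower" component at longer, exponential headways.
p_follow = 0.4                  # true follower percentage (assumed)
n = 20000
n_f = rng.binomial(n, p_follow)
followers = rng.lognormal(mean=np.log(1.5), sigma=0.25, size=n_f)  # ~1.5 s
free = 4.0 + rng.exponential(scale=6.0, size=n - n_f)              # > 4 s
headways = np.concatenate([followers, free])

# Crude estimator: classify headways below a threshold as follower headways.
# A full treatment would instead fit the mixture and read off its weight.
threshold = 3.0                 # seconds (illustrative)
est_follow_pct = float(np.mean(headways < threshold))
```

Because the two components barely overlap in this toy, the thresholded share recovers the true follower percentage closely; real headway data overlap more, which is why the authors estimate the percentage from fitted distribution parameters.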


2020 · Vol 12 (9) · pp. 145
Author(s): Franklin Tchakounté, Athanase Esdras Yera Pagor, Jean Claude Kamgang, Marcellin Atemkeng

To keep its business reliable, Google seeks to ensure the quality of apps on the store. One crucial aspect of quality is security, which is addressed through Google Play Protect and anti-malware solutions. However, these are not fully effective, since they rely on application features and application execution threads. Google provides additional elements that enable consumers to evaluate applications collectively, sharing their experiences via reviews or expressing their satisfaction through ratings. Ratings are more informal and hide the reasons behind a score, whereas reviews are textually expressive but require further processing to understand the opinions behind them. The literature lacks approaches that mine reviews through sentiment analysis to extract information useful for improving the security aspects of the applications provided. This work goes in this direction and investigates reviews in a fine-grained way in terms of confidentiality, integrity, availability, and authentication (CIAA). Assuming that reviews are reliable and not fake, the proposed approach determines review polarities based on CIAA-related keywords. We rely on the popular Naive Bayes classifier to classify reviews into positive, negative, and neutral sentiment. We then provide an aggregation model that fuses the different polarities into global and per-CIAA reputations for each application. Quantitative experiments were conducted on 13 applications, including e-banking, live messaging, and anti-malware apps, with a total of 1,050 security-related reviews and 7,835,322 functionality-related reviews. Results show that 23% of the applications (3 apps) have a reputation greater than 0.5, with strengths in integrity, authentication, and availability, while the remaining 77% score under 0.5. Developers should therefore invest substantial effort in security while writing code, and more effort is needed to improve confidentiality reputation.
Results also show that applications with a good functionality-related reputation generally have a bad security-related reputation. This means that even though the number of security reviews is low, security remains a consumer preoccupation; developers instead spend much more time testing whether applications work without errors, even when they contain possible security vulnerabilities. A quantitative comparison against well-known rating systems shows the effectiveness and robustness of CIAA-RepDroid in rating apps in terms of security. CIAA-RepDroid can be combined with existing rating solutions to recommend to developers the exact CIAA aspects to improve in their source code.
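The aggregation step can be sketched independently of the classifier. In the sketch below, the sample labels, the positive/(positive+negative) aspect score, and the unweighted CIAA average are illustrative assumptions, not CIAA-RepDroid's exact model; the inputs stand in for the (aspect, sentiment) pairs a Naive Bayes classifier would emit.

```python
from collections import defaultdict

# (aspect, sentiment) pairs as a sentiment classifier might emit them.
reviews = [
    ("confidentiality", "negative"), ("confidentiality", "negative"),
    ("integrity", "positive"), ("integrity", "positive"), ("integrity", "neutral"),
    ("availability", "positive"), ("availability", "negative"),
    ("authentication", "positive"),
]

def aspect_reputation(labelled):
    """Per-aspect reputation: positive / (positive + negative); neutral ignored."""
    pos, neg = defaultdict(int), defaultdict(int)
    for aspect, sentiment in labelled:
        if sentiment == "positive":
            pos[aspect] += 1
        elif sentiment == "negative":
            neg[aspect] += 1
    return {a: pos[a] / (pos[a] + neg[a]) for a in set(pos) | set(neg)}

rep = aspect_reputation(reviews)
global_rep = sum(rep.values()) / len(rep)   # simple unweighted CIAA average
```

A per-aspect breakdown of this kind is what lets the approach tell a developer which CIAA dimension (here, confidentiality) drags the global security reputation down.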


Author(s): Ghadeer Al-Bdour, Raffi Al-Qurran, Mahmoud Al-Ayyoub, Ali Shatnawi

Deep Learning (DL) is one of the hottest fields. To foster the growth of DL, several open-source frameworks have appeared, providing implementations of the most common DL algorithms. These frameworks vary in the algorithms they support and in the quality of their implementations. The purpose of this work is to provide a qualitative and quantitative comparison among three such frameworks: TensorFlow, Theano and CNTK. To ensure that our study is as comprehensive as possible, we consider multiple benchmark datasets from different fields (image processing, NLP, etc.) and measure the performance of the frameworks' implementations of different DL algorithms. For most of our experiments, we find that CNTK's implementations are superior to the other ones under consideration.
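Framework comparisons of this kind ultimately reduce to timing the same workload under each backend. A minimal, framework-agnostic harness might look like the sketch below (pure Python; the workload and repetition scheme are assumptions, and a real comparison would time each framework's training step instead).

```python
import time

def benchmark(fn, *args, repeats=5, warmup=1):
    """Median wall-clock time of fn(*args) over several repeats.

    Warm-up runs absorb one-off costs (JIT compilation, graph building,
    memory allocation) that would otherwise bias the first measurement."""
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

# Stand-in workload; a real study would pass each framework's train step here.
def workload(n):
    return sum(i * i for i in range(n))

t = benchmark(workload, 100_000)
```

Using the median rather than the mean makes the measurement robust to occasional scheduler hiccups, which matters when small timing differences decide a framework ranking.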

