effective layer
Recently Published Documents


TOTAL DOCUMENTS: 38 (FIVE YEARS: 16)
H-INDEX: 9 (FIVE YEARS: 2)

Author(s):  
V. Bratishko ◽  

Purpose of the study. To improve the efficiency of ultrasonic disintegration of plant raw materials by identifying rational parameters of the cavitation chamber, suspension properties, and processing modes that ensure the required level of disintegration. Research methods. Methods of analysis and generalization of published research on ultrasonic treatment of liquids and suspensions were used to substantiate a rational design, technological scheme, and parameters for equipment for ultrasonic disintegration of plant raw materials. Results of the study. Acoustic piezoelectric cavitators with a low ultrasound intensity (up to 5.0 W/cm²), introduced into the liquid or suspension through the bottom (walls) of an open non-resonant cavitation chamber, are the most suitable for cavitation treatment of aqueous suspensions of plant bioresources. In this arrangement the ultrasonic emitters are rigidly attached to the outside of the bottom (walls) of the cavitation chamber. Based on the analysis of the research results, a design for a device for ultrasonic treatment of suspensions of plant raw materials was proposed. Conclusions. The main parameters determining the efficiency of ultrasonic disintegration of plant raw material suspensions are the presence of an effective layer and a rational intensity of ultrasonic action on the medium in the cavitation chamber, which depends on the physical and mechanical properties of the treated medium. Based on this analysis, a structural and technological scheme of the device for ultrasonic treatment of suspensions is proposed.
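The low-intensity regime mentioned above (up to 5.0 W/cm² introduced through the chamber bottom) can be illustrated with a quick back-of-the-envelope check; the emitter power and chamber geometry below are assumed example values, not parameters from the study.

```python
# Illustrative only: checks emitter power against the low-intensity
# (<= 5.0 W/cm^2) regime described in the abstract. All numbers below
# are assumed for the example, not taken from the study.
import math

MAX_INTENSITY_W_PER_CM2 = 5.0          # ceiling quoted in the abstract

def bottom_intensity(emitter_power_w: float, bottom_diameter_cm: float) -> float:
    """Mean acoustic intensity over a circular chamber bottom, in W/cm^2."""
    area_cm2 = math.pi * (bottom_diameter_cm / 2.0) ** 2
    return emitter_power_w / area_cm2

intensity = bottom_intensity(emitter_power_w=300.0, bottom_diameter_cm=12.0)
print(f"Intensity: {intensity:.2f} W/cm^2")                      # ~2.65 W/cm^2
print("Within low-intensity regime:", intensity <= MAX_INTENSITY_W_PER_CM2)
```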


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0256187
Author(s):  
Junghun Kim ◽  
Jinhong Jung ◽  
U. Kang

Given a trained deep graph convolution network (GCN), how can we effectively compress it into a compact network without significant loss of accuracy? Compressing a trained deep GCN into a compact GCN is of great importance for deploying the model in environments such as mobile or embedded systems, which have limited computing resources. However, previous works on compressing deep GCNs do not consider multi-hop aggregation, even though it is the main purpose of their multiple GCN layers. In this work, we propose MustaD (Multi-staged knowledge Distillation), a novel approach for compressing deep GCNs to single-layered GCNs through multi-staged knowledge distillation (KD). MustaD distills the knowledge of 1) the aggregation from multiple GCN layers as well as 2) the task prediction, while preserving the multi-hop feature aggregation of deep GCNs in a single effective layer. Extensive experiments on four real-world datasets show that MustaD provides state-of-the-art performance compared with other KD-based methods. Specifically, MustaD achieves up to a 4.21%p accuracy improvement over the second-best KD models.
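A minimal sketch of the multi-staged distillation idea described above follows; it is not the authors' implementation. Plain linear layers stand in for graph convolutions so the example is self-contained, and the dimensions, temperature, and loss weights are assumptions.

```python
# Sketch of multi-staged KD in the spirit of MustaD: the student's single
# "effective layer" is trained to mimic (1) the teacher's final aggregated
# representation and (2) its task prediction. Not the published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepTeacher(nn.Module):
    def __init__(self, in_dim=64, hid=64, n_layers=8, n_classes=7):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else hid, hid) for i in range(n_layers)]
        )
        self.classifier = nn.Linear(hid, n_classes)

    def forward(self, x):
        for layer in self.layers:          # stands in for multi-hop aggregation
            x = F.relu(layer(x))
        return x, self.classifier(x)       # (aggregated features, logits)

class SingleLayerStudent(nn.Module):
    def __init__(self, in_dim=64, hid=64, n_classes=7):
        super().__init__()
        self.effective_layer = nn.Linear(in_dim, hid)   # single effective layer
        self.classifier = nn.Linear(hid, n_classes)

    def forward(self, x):
        h = F.relu(self.effective_layer(x))
        return h, self.classifier(h)

def distillation_loss(student_out, teacher_out, labels, T=2.0, alpha=0.5, beta=0.5):
    s_feat, s_logits = student_out
    t_feat, t_logits = teacher_out
    feat_kd = F.mse_loss(s_feat, t_feat)                             # stage 1: aggregation
    pred_kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),          # stage 2: prediction
                       F.softmax(t_logits / T, dim=-1),
                       reduction="batchmean") * (T * T)
    ce = F.cross_entropy(s_logits, labels)
    return ce + alpha * feat_kd + beta * pred_kd

# Toy training step on random data, purely to show the wiring.
teacher, student = DeepTeacher(), SingleLayerStudent()
x = torch.randn(32, 64)
y = torch.randint(0, 7, (32,))
with torch.no_grad():
    t_out = teacher(x)
loss = distillation_loss(student(x), t_out, y)
loss.backward()
```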


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4212
Author(s):  
Priscila Morais Argôlo Bonfim Estrela ◽  
Robson de Oliveira Albuquerque ◽  
Dino Macedo Amaral ◽  
William Ferreira Giozza ◽  
Rafael Timóteo de Sousa Júnior

As smart devices have become commonly used to access internet banking applications, these devices constitute appealing targets for fraudsters. Impersonation attacks are an essential concern for internet banking providers. Therefore, user authentication countermeasures based on biometrics, whether physiological or behavioral, have been developed, including those based on touch dynamics biometrics. These measures take into account the unique behavior of a person when interacting with touchscreen devices, thus hindering identification fraud because it is hard to impersonate natural user behaviors. Behavioral biometric measures must also balance security and usability: because they are part of the human interface, the measurement process should be transparent to the user. This paper proposes an improvement to Biotouch, a supervised machine learning-based framework for continuous user authentication. The contributions of the proposal comprise the utilization of multiple scopes to create more resilient reasoning models, and their respective datasets, for the improved Biotouch framework. Another contribution is the testing of these models to evaluate the imposter False Acceptance Rate (FAR). The proposal also improves the flow of data and computation within the framework. An evaluation of the proposed multiple-scope model yields F1 scores (the harmonic mean of recall and precision) between 90.68% and 97.05%. The rates of unduly authenticated imposters and of legitimate user rejection (Equal Error Rate, EER) lie between 1.88% and 9.85% for static verification, login, user dynamics, and post-login. These results indicate the feasibility of the proposed continuous multiple-scope authentication framework as an effective layer of security for banking applications, eventually operating jointly with conventional measures such as password-based authentication.
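Since the abstract reports EER figures, the snippet below shows one common way to estimate an EER from genuine and impostor similarity scores by locating the threshold where the false acceptance and false rejection rates meet. The score distributions are synthetic and purely illustrative; this is not the Biotouch pipeline.

```python
# Illustrative EER computation from synthetic score distributions; not the
# Biotouch framework itself. EER is the operating point where the false
# acceptance rate (FAR) equals the false rejection rate (FRR).
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Find the threshold where FAR and FRR are closest; return their mean."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for thr in thresholds:
        far = np.mean(impostor_scores >= thr)   # impostors wrongly accepted
        frr = np.mean(genuine_scores < thr)     # legitimate users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 500)    # higher similarity for the true user
impostor = rng.normal(0.5, 0.15, 500)
print(f"EER ≈ {equal_error_rate(genuine, impostor):.3f}")
```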


Author(s):  
Jonathan M. Garner ◽  
William C. Iwasko ◽  
Tyler D. Jewel ◽  
Richard L. Thompson ◽  
Bryan T. Smith

A dataset maintained by the Storm Prediction Center (SPC) of 6300 tornado events from 2009–2015, consisting of radar-identified convective modes and near-storm environmental information obtained from Rapid Update Cycle and Rapid Refresh model analysis grids, has been augmented with additional radar information on the low-level mesocyclones associated with tornado longevity, path length, and width. All EF2–EF5 tornadoes, in addition to randomly selected EF0–EF1 tornadoes, were extracted from the SPC dataset, yielding 1268 events for inclusion in the current study. Analysis of these data revealed similar values of the effective-layer significant tornado parameter for the longest-lived (60+ min) tornadic circulations, longest-tracked (≥ 68 km) tornadoes, and widest (≥ 1.2 km) tornadoes. However, the widest tornadoes occurring west of −94° longitude were associated with larger mean-layer convective available potential energy, storm-top divergence, and low-level rotational velocity. Furthermore, wide tornadoes occurred when low-level winds were out of the southeast, resulting in large low-level hodograph curvature and near-surface horizontal vorticity that was more purely streamwise compared with long-lived and long-tracked events. On the other hand, tornado path length and longevity were maximized with eastward-migrating synoptic-scale cyclones associated with strong southwesterly wind profiles through much of the troposphere, fast storm motions, large values of bulk wind difference and storm-relative helicity, and lower buoyancy.
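For orientation, one commonly cited formulation of the effective-layer significant tornado parameter is sketched below; it is not taken from this study, the exact form used with the SPC dataset may differ, and the input values in the example are assumed.

```python
# A commonly cited formulation of the effective-layer significant tornado
# parameter (STP), shown for orientation only; the study's exact computation
# may differ. Inputs in the example call are assumed values.

def effective_layer_stp(mlcape, mllcl, esrh, ebwd, mlcin):
    """mlcape [J/kg], mllcl [m AGL], esrh [m^2/s^2], ebwd [m/s], mlcin [J/kg, <= 0]."""
    cape_term = mlcape / 1500.0
    lcl_term = 1.0 if mllcl < 1000.0 else (0.0 if mllcl > 2000.0 else (2000.0 - mllcl) / 1000.0)
    srh_term = esrh / 150.0
    shear_term = 0.0 if ebwd < 12.5 else min(ebwd, 30.0) / 20.0   # capped at 1.5
    cin_term = 1.0 if mlcin > -50.0 else (0.0 if mlcin < -200.0 else (200.0 + mlcin) / 150.0)
    return cape_term * lcl_term * srh_term * shear_term * cin_term

# Example: a strongly sheared, moderately unstable environment (values assumed).
print(effective_layer_stp(mlcape=2000, mllcl=900, esrh=300, ebwd=25, mlcin=-25))
```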


CANTILEVER ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 109-114
Author(s):  
Yulindasari Sutejo ◽  
Sutanto Muliawan ◽  
Ratna Dewi ◽  
Febrian Hadinata ◽  
Budi Ariawan ◽  
...  

Peat soil has unfavorable characteristics such as low bearing capacity and high compressibility. The reinforcing materials used in this research, which function like a geogrid, are bamboo elements: woven bamboo matting and rectangular-patterned bamboo grids. The aim was to determine the bearing capacity and settlement of shallow foundations on peat soil before and after reinforcement. Laboratory-scale testing was used as the research methodology. The peat soil samples came from Dusun III Banyu Urip, Banyuasin regency, South Sumatra province; the bamboo was obtained from the Seberang Ulu area of Palembang City, and the sand from a sand depot in the Musi II area of Palembang City. The laboratory results show that increasing the number of reinforcement layers and placing them at an effective layer depth give a greater bearing capacity ratio (BCR). The bearing capacity of the unreinforced shallow foundation on peat soil, using Terzaghi's analysis, is 45.232 kPa. Testing layer depth variations d = 0b, d = 0.25b, and d = 0.5b with 1, 2, and 3 reinforcement layers showed that the highest bearing capacity occurs at d = 0.25b with 3 layers: 94 kPa, giving a BCR of 2.08 (an increase of 107.96%).
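As a quick check of the reported values (illustrative only), the bearing capacity ratio follows directly from the two bearing capacities:

```python
# Quick check of the bearing capacity ratio (BCR) reported in the abstract:
# BCR = q_reinforced / q_unreinforced.
q_unreinforced = 45.232   # kPa, Terzaghi analysis before reinforcement
q_reinforced = 94.0       # kPa, d = 0.25b with 3 bamboo layers
bcr = q_reinforced / q_unreinforced
print(f"BCR = {bcr:.2f}, increase = {(bcr - 1) * 100:.2f} %")
# ≈ 2.08 and ≈ 107.8 %; the small difference from the reported 107.96 %
# likely reflects rounding of the reported capacities.
```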


2020 ◽  
Vol 4 (2) ◽  
pp. 22
Author(s):  
Aldo Castillo ◽  
Cesar Molina ◽  
Edinson Reyes ◽  
Hans Portilla ◽  
César Arévalo ◽  
...  

The present research evaluated the effect of plasma nitriding time, in the range of 5 to 15 hours, on the cross-sectional hardness profile of AISI 431 stainless steel samples, in addition to measuring and comparing surface hardness, effective layer depth, and nitride layer thickness. The nitriding process was carried out in plasma with the process temperature held constant at 400 °C. The evaluated samples were machined (turned and faced) to one inch in diameter and one inch in length. The 10- and 15-hour nitriding times were obtained by accumulating 5 hours of nitriding per week. The hardness profiles were obtained with a LECO LMV-50V microhardness tester, and the ASTM E3-91 standard was followed when collecting the hardness data. From these data it was determined that the maximum surface hardnesses are 1053, 1252, and 1327 HV-0.01 for nitriding times of 5, 10, and 15 hours respectively; the average effective layer thicknesses were 37.75, 33, and 28.75 μm, while the nitride layer thicknesses were 4.9, 7.03, and 10.7 μm for the same times. The core hardness after the nitriding treatment remained in the range of 275-277 HV-0.01. These values were determined by microscopic evaluation of the tested samples; the metallographic reagent used was 3% Nital applied by electrolytic attack for 3 minutes in each case. The statistical analysis consisted of Student's t tests in the form of pairwise comparisons, from which a non-significant difference between repetitions and a significant difference between the study levels were determined.
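The abstract's statistical analysis relies on pairwise Student's t tests; the sketch below shows how such a comparison could be run on hardness readings with SciPy. The numbers are invented for illustration and are not the study's data.

```python
# Illustrative pairwise Student's t-test on hardness readings, in the spirit of
# the analysis described above. The data below are invented for the example and
# are NOT measurements from the study.
from scipy import stats

hardness_5h  = [1040, 1053, 1048, 1061, 1035]   # HV-0.01, hypothetical repetitions
hardness_15h = [1310, 1327, 1322, 1335, 1318]

# Paired comparison between the two nitriding times.
t_stat, p_value = stats.ttest_rel(hardness_5h, hardness_15h)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) indicates a significant difference between levels.
```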


2020 ◽  
Author(s):  
Ramesh Masthi ◽  
Afraz Jahan ◽  
Divya Bharathi ◽  
Pradam Abhilash ◽  
Vinayak Kaniyarakkal ◽  
...  

BACKGROUND The SARS-CoV-2 infection has rapidly saturated health systems, and traditional surveillance networks are finding it hard to keep pace with its spread. We designed a participatory disease surveillance (PDS) system to capture symptoms of influenza-like illness (ILI) and estimate SARS-CoV-2 infection in the community. OBJECTIVE While data generated by such platforms can help public health organisations find community hotspots and direct control measures effectively, they have never been compared with traditional systems. METHODS A completely anonymised web-based PDS system, www.trackcovid-19.org, was developed. We compared the symptomatic responses received from the PDS system with the traditional risk-based surveillance carried out by the Bruhat Bengaluru Mahanagara Palike over a period of 45 days in the South Indian city of Bengaluru. RESULTS The PDS system recorded 11062 entries from 106 postal codes. A healthy response was obtained from 10863 users, while 199 (1.8%) reported being symptomatic. Subgroup analysis of a 14-day symptomatic window recorded 33 (0.29%) responses. Risk-based surveillance covered a population of 605,284, with 209 (0.03%) individuals identified as symptomatic. CONCLUSIONS Web PDS platforms provide better visualisation of community infection than traditional risk-based surveillance systems. They are extremely useful in providing real-time information in the extended battle against this pandemic. When integrated into national disease surveillance systems, they can provide long-term community surveillance, adding an important cost-effective layer to already available data sources.


Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 274
Author(s):  
Jieying Wang ◽  
Maarten Terpstra ◽  
Jiří Kosinka ◽  
Alexandru Telea

Skeletons are well-known descriptors used for analysis and processing of 2D binary images. Recently, dense skeletons have been proposed as an extension of classical skeletons, providing a dual encoding for 2D grayscale and color images. Yet their encoding power, measured by the quality and size of the encoded image, and how these metrics depend on the selected encoding parameters, has not been formally evaluated. In this paper, we fill this gap with two main contributions. First, we improve the encoding power of dense skeletons by effective layer selection heuristics, a refined skeleton pixel-chain encoding, and a postprocessing compression scheme. Second, we propose a benchmark to assess the encoding power of dense skeletons for a wide set of natural and synthetic color and grayscale images. We use this benchmark to derive optimal parameters for dense skeletons. Our method, called Compressing Dense Medial Descriptors (CDMD), achieves higher compression ratios at similar quality compared with the well-known JPEG technique, thereby showing that skeletons can be an interesting option for lossy image encoding.
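For intuition, dense skeletons represent a grayscale image through the medial-axis skeletons of its threshold sets; the toy sketch below, using scikit-image, illustrates that underlying idea only and is not the CDMD method or its layer-selection heuristics.

```python
# Toy illustration of the idea behind dense skeletons: a grayscale image is
# represented by the medial-axis skeletons (plus distances) of its threshold
# sets ("layers"). This is NOT the CDMD pipeline, only the underlying notion.
import numpy as np
from skimage import data
from skimage.morphology import medial_axis

image = data.camera()                       # 8-bit grayscale test image
layers = {}
for t in (64, 128, 192):                    # a few illustrative thresholds
    layer = image >= t                      # binary threshold set ("layer")
    skeleton, distance = medial_axis(layer, return_distance=True)
    # Store skeleton pixels with their distance values: enough to approximately
    # reconstruct the layer by drawing discs of radius `distance`.
    layers[t] = np.column_stack(np.nonzero(skeleton) + (distance[skeleton],))
    print(f"threshold {t}: {len(layers[t])} skeleton points")
```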

