Traditional Approach: Recently Published Documents

TOTAL DOCUMENTS: 2753 (five years: 1177)
H-INDEX: 46 (five years: 9)

Author(s): Sławomir K. Zieliński, Paweł Antoniuk, Hyunkook Lee, Dale Johnson

Abstract: One of the greatest challenges in the development of binaural machine audition systems is the disambiguation between front and back audio sources, particularly in complex spatial audio scenes. The goal of this work was to develop a method for discriminating between front- and back-located ensembles in binaural recordings of music. To this end, 22,496 binaural excerpts, representing either front- or back-located ensembles, were synthesized by convolving multi-track music recordings with 74 sets of head-related transfer functions (HRTFs). The discrimination method was developed based on the traditional approach, involving hand-engineering of features, as well as using a deep learning technique incorporating a convolutional neural network (CNN). According to the results obtained under HRTF-dependent test conditions, the CNN showed a very high discrimination accuracy (99.4%), slightly outperforming the traditional method. However, under the HRTF-independent test scenario, the CNN performed worse than the traditional algorithm, highlighting the importance of testing the algorithms under HRTF-independent conditions and indicating that the traditional method might be more generalizable than the CNN. A minimum of 20 HRTFs is required to achieve satisfactory generalization performance for the traditional algorithm, and 30 HRTFs for the CNN. The minimum duration of audio excerpts required by both the traditional and CNN-based methods was assessed as 3 s. Feature importance analysis, based on a gradient attribution mapping technique, revealed that for both the traditional and the deep learning methods, the frequency band between 5 and 6 kHz is particularly important for discriminating between front and back ensemble locations. Linear-frequency cepstral coefficients, interaural level differences, and audio bandwidth were identified as the key descriptors facilitating the discrimination process in the traditional approach.
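As a rough illustration of the hand-engineered descriptors named above, the sketch below (not the authors' implementation; function names, window sizes, and the use of SciPy are assumptions) computes per-band interaural level differences and linear-frequency cepstral coefficients from a two-channel excerpt.

```python
# Hedged sketch: ILD per frequency band and LFCCs for a binaural excerpt.
import numpy as np
from scipy.signal import stft
from scipy.fft import dct

def ild_per_band(left, right, fs, nperseg=1024):
    """Interaural level difference (dB), averaged over time per STFT frequency bin."""
    _, _, L = stft(left, fs, nperseg=nperseg)
    _, _, R = stft(right, fs, nperseg=nperseg)
    eps = 1e-12
    return (10 * np.log10((np.abs(L) ** 2).mean(axis=1) + eps)
            - 10 * np.log10((np.abs(R) ** 2).mean(axis=1) + eps))

def lfcc(signal, fs, n_coeff=20, nperseg=1024):
    """Cepstral coefficients on a linear (not mel) frequency axis."""
    _, _, S = stft(signal, fs, nperseg=nperseg)
    log_power = np.log((np.abs(S) ** 2).mean(axis=1) + 1e-12)
    return dct(log_power, norm="ortho")[:n_coeff]

# Toy usage: 3 seconds of random noise stands in for a real binaural excerpt.
fs = 48_000
left, right = np.random.randn(fs * 3), 0.5 * np.random.randn(fs * 3)
features = np.concatenate([ild_per_band(left, right, fs), lfcc(left, fs)])
print(features.shape)
```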


SIASAT, 2022, Vol 7 (1), pp. 71-81
Author(s): Muhammad Faisal Hamdani

This study examines the interpretation of the verse on religious moderation, specifically the term wasathan ummatan in QS. Al-Baqarah (2): 143. The research matters for understanding how commentators read this Qur'anic verse on religious moderation and how it relates to the contemporary context. It uses a qualitative, traditional approach based on a literature review, applying a systematic review of references (systematic library research) related to the focus of the study. The data analysis technique is descriptive, interpretive, and deductive, carried out in stages: formulation, search, inspection/selection, analysis-synthesis, quality control and mainstreaming, and report preparation.


2022, Vol 12 (1)
Author(s): Samaneh Sadat Nickayin, Rosa Coluzzi, Alvaro Marucci, Leonardo Bianchini, Luca Salvati, ...

Abstract: Southern Europe is a hotspot for desertification risk because of the intimate impact of soil deterioration, landscape transformations, rising human pressure, and climate change. In this context, large-scale empirical analyses linking landscape fragmentation with desertification risk assume that increasing levels of land vulnerability to degradation are associated with significant changes in landscape structure. Using a traditional approach of landscape ecology, this study evaluates the spatial structure of a simulated landscape based on different levels of vulnerability to land degradation, using 15 metrics calculated at three time points (early 1960s, early 1990s, early 2010s) in Italy. While the (average) level of land vulnerability increased over time in almost all Italian regions, vulnerable landscapes proved to be increasingly fragmented as far as the number of homogeneous patches and the mean patch size are concerned. The spatial balance between affected and unaffected areas, typically observed in the 1960s, was progressively replaced with an intrinsically disordered landscape, and this process was more intense in regions exposed to higher (and increasing) levels of land degradation. The spread of larger land patches exposed to intrinsic degradation has important consequences, since (1) the rising number of hotspots may increase the probability of local-scale degradation processes, and (2) the buffering effect of neighbouring (unaffected) land can be less effective on bigger hotspots, promoting a downward spiral toward desertification.
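For readers unfamiliar with the fragmentation metrics mentioned, the sketch below (an illustrative assumption, not the study's code) counts homogeneous patches and their mean size on a toy raster of cells flagged as vulnerable, using 8-connected labelling.

```python
# Hedged sketch: number of patches and mean patch size for a binary vulnerability raster.
import numpy as np
from scipy import ndimage

def patch_metrics(vulnerable_mask, cell_area_ha=1.0):
    """Count 8-connected patches and their mean size (hectares per cell is an assumption)."""
    structure = np.ones((3, 3), dtype=int)              # 8-connectivity
    labels, n_patches = ndimage.label(vulnerable_mask, structure=structure)
    if n_patches == 0:
        return 0, 0.0
    sizes = ndimage.sum(vulnerable_mask, labels, index=np.arange(1, n_patches + 1))
    return n_patches, float(np.mean(sizes)) * cell_area_ha

# Toy 5x5 grid: 1 = vulnerable cell, 0 = unaffected cell.
mask = np.array([[1, 1, 0, 0, 0],
                 [1, 0, 0, 1, 1],
                 [0, 0, 0, 1, 1],
                 [0, 1, 0, 0, 0],
                 [0, 1, 1, 0, 0]])
print(patch_metrics(mask))   # 3 patches, mean size about 3.33 cells
```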


2022, pp. 096366252110572
Author(s): Michelle L. Edwards, Caden Ziegler

This study examines science communication within Ask Me Anything sessions hosted by US National Oceanic and Atmospheric Administration scientists on Reddit. In addition to considering a unique social media platform, our work makes an important contribution in revealing the limitations of a traditional approach to studying science communication and modeling an alternative. First, using an “assembled” approach, we qualitatively explore themes in National Oceanic and Atmospheric Administration scientists’ posts and consider how they reflect the goals of “deficit” and “dialogue” models. Second, using a “disassembling” approach, inspired by Davies and Horst and actor-network theory, we more deeply examine our experiences studying the Ask Me Anything sessions. We then demonstrate how this alternative approach identifies “hidden” human and non-human actants that may have shaped science communication as “mediators.” We use these insights to reject the common assumption that science communication on social media occurs solely and directly between scientists and publics.


Electronics, 2022, Vol 11 (2), pp. 223
Author(s): Zihao Wang, Sen Yang, Mengji Shi, Kaiyu Qin

In this study, a multi-level scale stabilizer for visual odometry (MLSS-VO), combined with a self-supervised feature matching method, is proposed to address the scale uncertainty and scale drift encountered in monocular visual odometry. First, the architecture of an instance-level recognition model is adopted to build a feature matching model based on a Siamese neural network. Combined with the traditional approach to feature point extraction, feature baselines on different levels are extracted and then treated as a reference for estimating the motion scale of the camera. On this basis, the size of the target in the tracking task is taken as the top-level feature baseline, while the motion matrix parameters obtained by the original feature-point visual odometry are used to solve the real motion scale of the current frame. The multi-level feature baselines are solved to update the motion scale while reducing scale drift. Finally, the spatial target localization algorithm and the MLSS-VO are combined into a framework for target tracking on a mobile platform. According to the experimental results, the root mean square error (RMSE) of localization is less than 3.87 cm, and the RMSE of target tracking is less than 4.97 cm, which demonstrates that the MLSS-VO method is effective in resolving scale uncertainty and restricting scale drift, ensuring the spatial positioning and tracking of the target.
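The core idea of recovering an absolute scale from a known-size feature baseline can be pictured roughly as follows; this is a simplified pinhole-camera sketch under the assumption of motion mostly along the viewing ray, not the MLSS-VO formulation itself, and all numbers are invented.

```python
# Hedged sketch: rescaling a monocular VO translation using a target of known size.
import numpy as np

def absolute_translation(t_unit, baseline_m, baseline_px_prev, baseline_px_curr, fx):
    """Rescale a unit-norm VO translation using depths implied by a known-size target."""
    depth_prev = fx * baseline_m / baseline_px_prev   # pinhole depth, previous frame
    depth_curr = fx * baseline_m / baseline_px_curr   # pinhole depth, current frame
    scale = abs(depth_curr - depth_prev)              # approximate metric displacement
    return scale * t_unit / np.linalg.norm(t_unit)

t = absolute_translation(np.array([0.0, 0.0, 1.0]), 0.30, 150.0, 120.0, 600.0)
print(t)   # translation rescaled to metres under these toy numbers
```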


2022
Author(s): Khalid Fahad Almulhem, Ataur Malik, Mustafa Ghazwi

Abstract: Acid fracturing has been one of the most effective stimulation techniques applied in carbonate formations to enhance oil and gas production. The traditional approach to stimulating a carbonate reservoir has been to pump crosslinked gel and acid blends, such as plain 28% HCl, emulsified acid (EA), and in-situ gelled acid, at fracturing rates in order to maximize the stimulated reservoir volume with the desired conductivity. Given the common challenges encountered in fracturing carbonate formations, including high leak-off and fast acid reaction rates, the conventional practice of acid fracturing involves complex pumping schemes of pad, acid, and viscous diverter fluid cycles to achieve fracture length and conductivity targets. A new generation of Acid-Based Crosslinked (ABC) fluid system has been deployed to stimulate high-temperature carbonate formations in three separate field trials, aiming to provide rock-breaking viscosity, acid retardation, and effective leak-off control. The ABC fluid system was introduced progressively, initially as diverter/leak-off control cycles within the pad and acid stages. Later it was used as the main acid-based fluid system for enhancing live-acid penetration, diverting, and reducing leak-off, as well as keeping the rock open during the hydraulic fracturing operation. Unlike in-situ crosslinked acid systems, which use acid reaction by-products to start the crosslinking process, the ABC fluid system uses a unique crosslinker/breaker combination independent of the acid reaction. The system is prepared with 20% hydrochloric acid and an acrylamide polymer, along with zirconium metal for delayed crosslinking in unspent acid. The ABC fluid system aims to reduce three fluid requirements to one by eliminating the need for an intricate pumping schedule that would otherwise include: a non-acid fracturing pad stage to break down the formation and generate the targeted fracture geometry; a retarded emulsified acid system to achieve deep-penetrating, differentially etched fractures; and a self-diverting agent to minimize fluid leak-off. This paper describes the efforts behind the introduction of this novel Acid-Based Crosslinked fluid system in different field trials. Details of the fluid design optimization are included to illustrate how a single system can replace the need for multiple fluids. The ABC fluid was formulated to meet challenging bottom-hole formation conditions and resulted in encouraging post-treatment well performance.


Terminology, 2022
Author(s): Ayla Rigouts Terryn, Véronique Hoste, Els Lefever

Abstract: As with many tasks in natural language processing, automatic term extraction (ATE) is increasingly approached as a machine learning problem. So far, most machine learning approaches to ATE broadly follow the traditional hybrid methodology: first extracting a list of unique candidate terms, then classifying these candidates based on the predicted probability that they are valid terms. However, with the rise of neural networks and word embeddings, the next development in ATE might be towards sequential approaches, i.e., classifying each occurrence of each token within its original context. To test the validity of such approaches for ATE, two sequential methodologies were developed, evaluated, and compared: a feature-based conditional random fields classifier and an embedding-based recurrent neural network. An additional comparison was made with a machine learning interpretation of the traditional approach. All systems were trained and evaluated on identical data in multiple languages and domains to identify their respective strengths and weaknesses. The sequential methodologies proved to be valid approaches to ATE, and the neural network even outperformed the more traditional approach. Interestingly, a combination of multiple approaches can outperform all of them separately, showing new ways to push the state of the art in ATE.
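To make the contrast concrete, the sketch below (a toy stand-in, not the authors' systems) shows the sequential framing: every token occurrence is labelled in its context by a small bidirectional GRU tagger, rather than classifying a deduplicated candidate list. The vocabulary, dimensions, and two-label scheme are assumptions.

```python
# Hedged sketch: token-level (sequential) term tagging with a tiny BiGRU.
import torch
import torch.nn as nn

class TokenTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden=64, n_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)    # label per token: term / not-term

    def forward(self, token_ids):                     # (batch, seq_len)
        h, _ = self.rnn(self.embed(token_ids))        # (batch, seq_len, 2*hidden)
        return self.out(h)                            # (batch, seq_len, n_labels)

model = TokenTagger(vocab_size=1000)
tokens = torch.randint(0, 1000, (1, 12))              # one toy sentence of 12 token ids
logits = model(tokens)
print(logits.argmax(dim=-1))                          # predicted label for each occurrence
```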


Robotics, 2022, Vol 11 (1), pp. 13
Author(s): Neda Hassanzadeh, Alba Perez-Gracia

Mixed-position kinematic synthesis is used to not only reach a certain number of precision positions, but also impose certain instantaneous motion conditions at those positions. In the traditional approach, one end-effector twist is defined at each precision position in order to achieve better guidance of the end-effector along a desired trajectory. For one-degree-of-freedom linkages, that suffices to fully specify the trajectory locally. However, for systems with a higher number of degrees of freedom, such as robotic systems, it is possible to specify a complete higher-dimensional subspace of potential twists at particular positions. In this work, we focus on the 3R serial chain. We study the three-dimensional subspaces of twists that can be defined and set the mixed-position equations to synthesize the chain. The number and type of twist systems that a chain can generate depend on the topology of the chain; we find that the spatial 3R chain can generate seven different fully defined twist systems. Finally, examples of synthesis with several fully defined and partially defined twist spaces are presented. We show that it is possible to synthesize 3R chains for feasible subspaces of different types. This allows a complete definition of potential motions at particular positions, which could be used for the design of precise interaction with contact surfaces.
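A small numerical sketch of the underlying object, with invented joint-axis data (not taken from the paper): each revolute joint contributes a screw, and at a given pose the end-effector twists of the 3R chain span the column space of the resulting 6x3 matrix.

```python
# Hedged sketch: the 3-dimensional twist subspace of a spatial 3R chain at one pose.
import numpy as np

def revolute_screw(w, q):
    """6-vector twist of a revolute joint: angular part w, linear part q x w."""
    w = np.asarray(w, float) / np.linalg.norm(w)
    return np.concatenate([w, np.cross(np.asarray(q, float), w)])

# Toy joint axes: unit directions and points lying on each axis.
S1 = revolute_screw([0, 0, 1], [0, 0, 0])
S2 = revolute_screw([0, 1, 0], [1, 0, 0])
S3 = revolute_screw([1, 0, 0], [1, 1, 0])

J = np.column_stack([S1, S2, S3])        # 6x3 matrix of joint screws at this pose
print(np.linalg.matrix_rank(J))          # 3 -> a fully defined three-dimensional twist system
```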


Author(s): Tianhao Yan, Mugurel Turos, Chelsea Bennett, John Garrity, Mihai Marasteanu

High field density helps increase the durability of asphalt pavements. In a current research effort, the University of Minnesota and the Minnesota Department of Transportation (MnDOT) have been working on designing asphalt mixtures with higher field densities. One critical issue is the determination of the Ndesign values for these mixtures. The physical meaning of Ndesign is discussed first. Instead of the traditional approach, in which Ndesign represents a measure of rutting resistance, Ndesign is interpreted as an indication of the compactability of mixtures. The field density data from some recent Minnesota pavement projects are analyzed. A clear negative correlation between Ndesign and field density level is identified, which confirms the significant effect of Ndesign on the compactability, and consequently on the field density, of mixtures. To achieve consistency between laboratory and field compaction, it is proposed that Ndesign should be determined to reflect the real field compaction effort. A parameter called the equivalent number of gyrations to field compaction effort (Nequ) is proposed to quantify the field compaction effort, and the Nequ values for some recent Minnesota pavement projects are calculated. The results indicate that the field compaction effort for the current Minnesota projects evaluated corresponds to about 30 gyrations of gyratory compaction. The computed Nequ is then used as the Ndesign for a Superpave 5 mixture placed in a paving project, for which field density data and laboratory performance test results are obtained. The data analysis shows that both the field density and pavement performance of the Superpave 5 mixture are significantly improved compared with the traditional mixtures. The results indicate that Nequ provides a reasonable estimation of field compaction effort, and that Nequ can be used as the Ndesign for achieving higher field densities.
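One plausible way to read off such an equivalent gyration number, sketched below with invented data and the common log-linear densification model (not necessarily the procedure used in the study), is to fit laboratory density against the logarithm of gyrations and solve for the gyration count that reproduces the measured field density.

```python
# Hedged sketch: equivalent gyrations from a gyratory compaction curve (toy data).
import numpy as np

gyrations = np.array([5, 10, 20, 50, 100])             # lab compaction record
density = np.array([88.5, 90.8, 93.0, 95.9, 98.1])     # % of Gmm at each gyration count

b, a = np.polyfit(np.log(gyrations), density, 1)       # density ~ a + b * ln(N)

field_density = 93.5                                    # % of Gmm measured in the field
N_equ = np.exp((field_density - a) / b)                 # gyrations matching field density
print(round(N_equ))
```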


Author(s): Xuan Song, Hai Yun Gao, Karl Herrup, Ronald P. Hart

Gene expression studies using xenograft transplants or co-culture systems, usually with mixed human and mouse cells, have proven valuable for uncovering cellular dynamics during development or in disease models. However, mRNA sequence similarity among species presents a challenge for accurate transcript quantification. To identify optimal strategies for analyzing mixed-species RNA sequencing data, we evaluate both alignment-dependent and alignment-independent methods. Alignment of reads to a pooled reference index is effective, particularly if optimal alignments are used to classify sequencing reads by species, which are then re-aligned to the individual genomes, generating [Formula: see text] accuracy across a range of species ratios. Alignment-independent methods, such as convolutional neural networks, which extract the conserved sequence patterns of the two species, classify RNA sequencing reads with over 85% accuracy. Importantly, both methods perform well with different ratios of human and mouse reads. While non-alignment strategies successfully partitioned reads by species, the more traditional approach of mixed-genome alignment followed by optimized separation of reads proved to be more successful, with lower error rates.
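The read-partitioning step can be pictured with the toy sketch below (the alignment scores are invented and the margin threshold is an assumption, not the paper's pipeline): each read is assigned to the species with the better alignment score, and close calls are set aside as ambiguous.

```python
# Hedged sketch: partition reads by comparing per-genome alignment scores.
def partition_reads(scores_human, scores_mouse, min_margin=5):
    """Assign read IDs to 'human', 'mouse', or 'ambiguous' by alignment-score margin."""
    assignment = {}
    for read_id in set(scores_human) | set(scores_mouse):
        h = scores_human.get(read_id, float("-inf"))
        m = scores_mouse.get(read_id, float("-inf"))
        if abs(h - m) < min_margin:
            assignment[read_id] = "ambiguous"
        else:
            assignment[read_id] = "human" if h > m else "mouse"
    return assignment

print(partition_reads({"r1": 140, "r2": 60, "r3": 100},
                      {"r1": 80, "r2": 130, "r3": 102}))
# e.g. {'r1': 'human', 'r2': 'mouse', 'r3': 'ambiguous'} (key order may vary)
```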

