practical challenge
Recently Published Documents


TOTAL DOCUMENTS: 117 (FIVE YEARS: 49)
H-INDEX: 12 (FIVE YEARS: 3)

2022 ◽  
Vol 17 ◽  
pp. 16-24
Author(s):  
Lalit Mohan Satapathy ◽  
Pranati Das

Image denoising plays a vital role in digital image processing, where the primary objective is to recover a clean image from a noisy one. This is not a simple task. Because the practical challenge is widely recognized, a variety of methods have been presented over the last few years, of which wavelet transform-based approaches are the most common. Wavelet-based methods, however, have their own limitations in image processing applications, such as shift sensitivity, poor directionality, and lack of phase information, and they also face difficulties in defining the threshold parameters. This study therefore presents an image denoising approach based on Bi-dimensional Empirical Mode Decomposition (BEMD). Its main purpose is to decompose noisy images according to their frequency content and to construct a hybrid algorithm that combines existing denoising techniques. The approach decomposes the noisy image into several intrinsic mode functions (IMFs) plus a residue, which are subsequently filtered independently according to their specific properties. To quantify the success of the proposed technique, a comprehensive analysis of experimental results on benchmark test images was conducted using several performance metrics. The reconstructed image is more accurate and visually pleasing, outperforming state-of-the-art denoising approaches in terms of PSNR, MSE, and SSIM.
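For illustration, the per-IMF filtering idea can be sketched as below. The `bemd_decompose` argument stands in for any bi-dimensional EMD routine, and Gaussian smoothing of the highest-frequency IMFs is just one possible per-component filter; neither is claimed to be the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def denoise_with_bemd(noisy, bemd_decompose, sigmas=(1.5, 1.0, 0.5)):
    """Hypothetical BEMD-based denoising: decompose, filter each IMF, recombine.

    `bemd_decompose` is assumed to return [IMF_1, ..., IMF_n, residue],
    ordered from highest to lowest spatial frequency.
    """
    components = bemd_decompose(noisy)
    imfs, residue = components[:-1], components[-1]

    filtered = []
    for k, imf in enumerate(imfs):
        # High-frequency IMFs carry most of the noise, so smooth them more strongly;
        # lower-frequency IMFs and the residue are kept untouched.
        sigma = sigmas[k] if k < len(sigmas) else 0.0
        filtered.append(gaussian_filter(imf, sigma) if sigma > 0 else imf)

    return np.sum(filtered, axis=0) + residue

def report_quality(clean, restored):
    """PSNR, MSE and SSIM, the metrics reported in the abstract."""
    rng = float(clean.max() - clean.min())
    psnr = peak_signal_noise_ratio(clean, restored, data_range=rng)
    ssim = structural_similarity(clean, restored, data_range=rng)
    mse = float(np.mean((clean - restored) ** 2))
    return psnr, mse, ssim
```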


2022 ◽  
Author(s):  
Brian L Zhong ◽  
Vipul T Vachharajani ◽  
Alexander R Dunn

Numerous proteins experience and respond to mechanical forces as an integral part of their cellular functions, but measuring these forces remains a practical challenge. Here, we present a compact, 11 kDa molecular tension sensor termed STReTCh (Sensing Tension by Reactive Tag Characterization). Unlike existing genetically encoded tension sensors, STReTCh does not rely on experimentally demanding Förster resonance energy transfer (FRET)-based measurements and is compatible with typical fix-and-stain protocols. Using a magnetic tweezers assay, we calibrate the STReTCh module and show that it responds to physiologically relevant, piconewton forces. As proof of concept, we use an extracellular STReTCh-based sensor to visualize cell-generated forces at integrin-based adhesion complexes. In addition, we incorporate STReTCh into vinculin, a cytoskeletal adaptor protein, and show that STReTCh reports on forces transmitted between the cytoskeleton and cellular adhesion complexes. These data illustrate the utility of STReTCh as a broadly applicable tool for the measurement of molecular-scale forces in biological systems.


2022 ◽  
Vol 33 (1) ◽  
pp. 89-106
Author(s):  
Ann Weatherall ◽  
Emma Tennent ◽  
Fiona Grattan

Societies are undergoing enormous upheavals in the wake of the COVID-19 pandemic. High levels of psychological distress are widespread, yet little is known about the exact impacts at the micro-level of everyday life. The present study examines the ordinary activity of buying bread to understand changes occurring early in the crisis. A dataset of over 50 social interactions at a community market stall was video-recorded, transcribed, and examined in detail using multi-modal conversation analysis. With COVID-19 came an orientation to a heightened risk of disease transmission when selling food. The bread was placed in bags, a difference that was justified as a preventative measure and morally normalised by invoking a common-sense prohibition on touching produce. Having the bread out of immediate sight was a practical challenge that occasioned the expansion of turns and sequences to look for and/or confirm what was for sale, highlighting a normative organisation between seeing and buying. The analysis shows how participants adjusted interactionally to a pandemic-related preventative measure. More broadly, this research reveals the small changes to daily life that likely contribute to the overall negative impacts on health and well-being that have been reported.


Author(s):  
Armin Tavakoli ◽  
Alejandro Pozas-Kerstjens ◽  
Mingxing Luo ◽  
Marc-Olivier Renou

Abstract Bell’s theorem proves that quantum theory is inconsistent with local physical models. It has propelled research in the foundations of quantum theory and quantum information science. As a fundamental feature of quantum theory, it impacts predictions far beyond the traditional scenario of the Einstein-Podolsky-Rosen paradox. In the last decade, the investigation of nonlocality has moved beyond Bell’s theorem to consider more sophisticated experiments that involve several independent sources that distribute shares of physical systems among many parties in a network. Network scenarios, and the nonlocal correlations that they give rise to, lead to phenomena that have no counterpart in traditional Bell experiments, thus presenting a formidable conceptual and practical challenge. This review discusses the main concepts, methods, results and future challenges in the emerging topic of Bell nonlocality in networks.
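For context, the "traditional scenario" referred to above is the two-party CHSH test, in which any local model satisfies |S| ≤ 2 while measurements on a shared singlet state reach 2√2. A minimal numerical check (not taken from the review) is sketched below, using the standard singlet correlation E(x, y) = -cos(x - y).

```python
import numpy as np

def chsh_value(a, a_prime, b, b_prime):
    """CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
    with the singlet-state correlation E(x, y) = -cos(x - y)."""
    E = lambda x, y: -np.cos(x - y)
    return E(a, b) + E(a, b_prime) + E(a_prime, b) - E(a_prime, b_prime)

# Optimal angles give |S| = 2*sqrt(2) ~ 2.83, violating the local bound of 2.
S = chsh_value(0.0, np.pi / 2, np.pi / 4, -np.pi / 4)
print(abs(S), 2 * np.sqrt(2))
```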


2021 ◽  
Vol 2021 (3) ◽  
Author(s):  
Ryo Torii ◽  
Magdi H Yacoub

Computation of fractional flow reserve from CT coronary angiography and computational fluid dynamics (CT-based FFR), used to assess the severity of coronary artery stenosis, was introduced around a decade ago and is now one of the most successful applications of computational fluid dynamics modelling in clinical practice. Although the mathematical modelling framework behind the approach and the clinical operational model vary, its clinical efficacy has generally been well demonstrated. In this review, the technical elements behind CT-based FFR computation are summarised, along with key assumptions and challenges. These challenges include the complexity of the model (such as blood viscosity and vessel wall compliance modelling), whose impact has been debated in the literature. Efforts made to address the practical challenge of processing time are also reviewed. Further application areas – myocardial bridge, renal stenosis and lower limb stenosis – are then discussed along with the specific challenges expected in each.
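For background (not stated in this abstract), invasive FFR is conventionally defined as the ratio of mean distal coronary pressure to mean aortic pressure under hyperaemia; CT-based FFR estimates the distal pressure from a flow simulation on the segmented coronary tree rather than from a pressure wire. A trivial sketch of the ratio, with an assumed clinical cut-off of roughly 0.80:

```python
def ffr(p_distal_mean: float, p_aortic_mean: float) -> float:
    """Fractional flow reserve: mean distal coronary pressure divided by
    mean aortic pressure during hyperaemia (both in mmHg)."""
    return p_distal_mean / p_aortic_mean

# Example: 68 mmHg distal vs 92 mmHg aortic gives FFR ~ 0.74, below the
# ~0.80 threshold commonly used to call a stenosis haemodynamically significant.
print(round(ffr(68.0, 92.0), 2))
```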


2021 ◽  
Vol 13 (20) ◽  
pp. 4133
Author(s):  
Jakub Nalepa ◽  
Michal Myller ◽  
Lukasz Tulczyjew ◽  
Michal Kawulok

Hyperspectral images capture very detailed information about scanned objects and can therefore be used to uncover various characteristics of the materials present in the analyzed scene. However, such image data are difficult to transfer due to their large volume, and generating new ground-truth datasets that could be used to train supervised learners is costly, time-consuming, highly user-dependent, and often infeasible in practice. Research efforts have focused on developing algorithms for hyperspectral data classification and unmixing, the two main tasks in the analysis chain for such imagery. Although deep learning techniques have emerged as an extremely effective tool for both tasks, designing deep models that generalize well to unseen data remains a serious practical challenge in emerging applications. In this paper, we introduce deep ensembles that benefit from different architectural advances of convolutional base models, and we suggest a new approach to aggregating the outputs of the base learners using a supervised fuser. Furthermore, we propose a model augmentation technique that allows us to synthesize new deep networks from the original one by injecting Gaussian noise into the model's weights. Experiments performed for both hyperspectral data classification and unmixing show that our deep ensembles outperform base spectral and spectral-spatial deep models, as well as classical ensembles that employ voting and averaging as the fusing scheme, in both hyperspectral image analysis tasks.
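The model-augmentation step described above (synthesising extra ensemble members by perturbing a trained network's weights with Gaussian noise) can be sketched as follows. This is an illustrative PyTorch snippet with an assumed noise scale and a simple averaging fuser standing in for the paper's supervised fuser; the toy architecture and tensor shapes are placeholders.

```python
import copy
import torch
import torch.nn as nn

def augment_model(model: nn.Module, sigma: float = 0.01) -> nn.Module:
    """Synthesize a new ensemble member by adding Gaussian noise to the weights."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for param in noisy.parameters():
            param.add_(sigma * torch.randn_like(param))
    return noisy

def ensemble_predict(models, x):
    """Averaging fuser used here for simplicity; the paper trains a supervised fuser."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)

# Toy spectral classifier: 200 bands in, 10 classes out (placeholder architecture).
base = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 10))
ensemble = [base] + [augment_model(base, sigma=0.01) for _ in range(4)]
predictions = ensemble_predict(ensemble, torch.randn(8, 200))  # 8 pixels
```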


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Youngchun Kwon ◽  
Seokho Kang ◽  
Youn-Suk Choi ◽  
Inkoo Kim

Abstract Evolutionary design has gained significant attention as a useful tool for accelerating the design process by automatically modifying molecular structures to obtain molecules with target properties. However, its methodology presents a practical challenge: devising a way to rapidly evolve molecules while maintaining their chemical validity. In this study, we address this limitation by developing an evolutionary design method that employs deep learning models to extract the inherent knowledge from a database of materials and uses it to guide the evolutionary design effectively. In the proposed method, the Morgan fingerprint vectors of seed molecules are evolved using the mutation and crossover operations of a genetic algorithm. A recurrent neural network then reconstructs the final fingerprints into actual molecular structures while maintaining their chemical validity. Deep neural network models that predict the properties of these molecules allow the method to be applied repeatedly, making molecular evaluation more versatile and efficient. Four design tasks were performed to modify the light-absorbing wavelengths of organic molecules from the PubChem library.
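A minimal sketch of the fingerprint-level genetic operators described above, using RDKit Morgan fingerprints. The `score_fn` (property predictor) and `decode_fn` (the paper's RNN that maps fingerprints back to valid molecules) are assumed callables supplied by the user, not implementations of the authors' models.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

N_BITS = 2048

def morgan_bits(smiles: str, radius: int = 2) -> np.ndarray:
    """Encode a molecule as a binary Morgan fingerprint vector."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=N_BITS)
    arr = np.zeros((N_BITS,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

def crossover(parent_a: np.ndarray, parent_b: np.ndarray) -> np.ndarray:
    """Uniform crossover: each bit is inherited from a randomly chosen parent."""
    mask = np.random.rand(N_BITS) < 0.5
    return np.where(mask, parent_a, parent_b)

def mutate(bits: np.ndarray, rate: float = 0.002) -> np.ndarray:
    """Flip a small random fraction of fingerprint bits."""
    flips = np.random.rand(N_BITS) < rate
    return np.logical_xor(bits, flips).astype(np.int8)

def evolve(seed_smiles, score_fn, decode_fn, generations=10, pop_size=50):
    """score_fn: property predictor over bit vectors (e.g. a trained DNN);
    decode_fn: fingerprint-to-molecule decoder (the paper uses an RNN)."""
    population = [morgan_bits(s) for s in seed_smiles]
    for _ in range(generations):
        scores = np.array([score_fn(p) for p in population])
        keep = [population[i] for i in np.argsort(scores)[-max(2, pop_size // 2):]]
        children = []
        while len(keep) + len(children) < pop_size:
            i, j = np.random.randint(len(keep), size=2)
            children.append(mutate(crossover(keep[i], keep[j])))
        population = keep + children
    best = max(population, key=score_fn)
    return decode_fn(best)
```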


Author(s):  
Ying-Peng Tang ◽  
Sheng-Jun Huang

To learn an effective model with fewer training examples, existing active learning methods typically assume that a target model is given and try to fit it by selecting the most informative examples. However, the best target model is difficult to determine a priori, so performance may be suboptimal even if the data are perfectly selected. To tackle this practical challenge, this paper proposes a novel framework of dual active learning (DUAL) that simultaneously performs model search and data selection. Specifically, an effective method with truncated importance sampling is proposed for Combined Algorithm Selection and Hyperparameter optimization (CASH), which mitigates the model evaluation bias on the labeled data. Further, we propose an active query strategy to label the most valuable examples. On one hand, the strategy favors discriminative data to help CASH find the best model; on the other hand, it prefers informative examples to accelerate the convergence of the winning models. Extensive experiments are conducted on 12 OpenML datasets. The results demonstrate that the proposed method can effectively learn a superior model with fewer labeled examples.
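A generic sketch of truncated importance sampling for debiasing model evaluation on actively queried (hence non-uniformly sampled) labeled data; the notation and truncation cap are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def truncated_is_risk(losses, p_target, q_query, cap=10.0):
    """Estimate a candidate model's risk from actively queried examples.

    losses   : per-example losses of the candidate on the labeled pool
    p_target : probability of each example under the target (e.g. uniform) distribution
    q_query  : probability with which the active strategy actually queried it
    cap      : truncation threshold bounding the importance weights, trading a
               little bias for much lower variance
    """
    weights = np.minimum(p_target / q_query, cap)
    return float(np.sum(weights * losses) / np.sum(weights))

# Example: three labeled points, one of which was queried with very low probability.
losses = np.array([0.2, 0.9, 0.4])
p = np.full(3, 1 / 3)
q = np.array([0.5, 0.05, 0.45])
print(truncated_is_risk(losses, p, q, cap=5.0))
```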


Land ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 665
Author(s):  
Xin Cheng ◽  
Sylvie Van Damme ◽  
Pieter Uyttenhove

Landscape architects play a significant role in safeguarding urban landscapes and human well-being by means of design, and they call for practical knowledge, skills, and methods to address increasing environmental pressure. Cultural ecosystem services (CES) are recognized as highly relevant to landscape architecture (LA) studies, and the outcomes of CES evaluations have the potential to support LA practice. However, few efforts have focused on systematically investigating CES in LA studies, and how CES evaluations are performed in LA studies is rarely researched. This study aims to identify the challenges and provide recommendations for applying CES evaluations to LA practice, focusing specifically on LA design. Three challenges are identified: a lack of consistent concepts (conceptual challenge); a lack of CES evaluation methods to inform designs (methodological challenge); and practical issues in transferring CES evaluations to LA design (practical challenge). Based on our findings, we recommend using CES as a common term to refer to socio-cultural values and encourage more CES evaluation methods to be developed and tested for LA design. In addition, we encourage more studies to explore the links between CES and landscape features and to address other practical issues so that CES evaluations can be better transferred into LA designs.


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2716
Author(s):  
Sri Harsha Turlapati ◽  
Dino Accoto ◽  
Domenico Campolo

Localisation of geometric features such as holes, edges, and slots is vital to robotic planning in industrial automation settings. Low-cost 3D scanners are crucial for improving accessibility, but their poorer resolution poses a practical challenge to feature localisation and consequently affects robotic planning. In this work, we address the possibility of enhancing the quality of a 3D scan by a manual 'touch-up' of task-relevant features, to ensure their automatic detection prior to automation. We propose a framework whereby the operator (i) has access to both the actual work-piece and its 3D scan; (ii) evaluates which salient features are missing from the scan; (iii) uses a haptic stylus to physically interact with the actual work-piece around those specific features; and (iv) interactively updates the scan using the position and force information from the haptic stylus. The contribution of this work is the use of haptic mismatch for geometric update. Specifically, the geometry from the 3D scan is used to predict the haptic feedback at a point on the work-piece surface, and the haptic mismatch is defined as a measure of the error between this prediction and the real interaction forces from physical contact at that point. The geometric update is driven until the haptic mismatch is minimised. Convergence of the proposed algorithm is first verified numerically on an analytical surface with simulated physical interaction, and errors in surface position and orientation are analysed. Experiments were then conducted using a motion capture system providing sub-mm positional accuracy and a six-axis F/T sensor; missing features were successfully detected after the scan was updated with the proposed method.
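A rough sketch of the haptic-mismatch update loop. The linear-spring contact model, the heightmap representation of the scan, and the proportional correction gain are all simplifying assumptions made for illustration; they are not the authors' formulation.

```python
import numpy as np

def haptic_update(grid_z, probes, stiffness=500.0, gain=2e-4, tol=0.05, max_iter=500):
    """Minimise the haptic mismatch between predicted and measured contact forces.

    grid_z : 2-D heightmap taken from the 3-D scan (metres)
    probes : iterable of (i, j, probe_z, measured_force) tuples, where (i, j)
             indexes the touched cell, probe_z is the stylus tip height and
             measured_force is the normal force from the F/T sensor (newtons)
    """
    grid_z = grid_z.copy()
    for _ in range(max_iter):
        worst = 0.0
        for i, j, probe_z, f_meas in probes:
            # Linear-spring contact model: the scan predicts a force proportional
            # to how far the stylus tip sits below the scanned surface.
            penetration = max(grid_z[i, j] - probe_z, 0.0)
            f_pred = stiffness * penetration
            mismatch = f_meas - f_pred        # haptic mismatch at this probe point
            grid_z[i, j] += gain * mismatch   # raise/lower the local surface estimate
            worst = max(worst, abs(mismatch))
        if worst < tol:                       # forces agree everywhere: converged
            break
    return grid_z
```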

