HDR-Net-Fusion: Real-time 3D dynamic scene reconstruction with a hierarchical deep reinforcement network

Author(s):  
Haoxuan Song ◽  
Jiahui Huang ◽  
Yan-Pei Cao ◽  
Tai-Jiang Mu

Reconstructing dynamic scenes with commodity depth cameras has many applications in computer graphics, computer vision, and robotics. However, due to the presence of noise and erroneous observations from data capturing devices and the inherently ill-posed nature of non-rigid registration with insufficient information, traditional approaches often produce low-quality geometry with holes, bumps, and misalignments. We propose a novel 3D dynamic reconstruction system, named HDR-Net-Fusion, which learns to simultaneously reconstruct and refine the geometry on the fly with a sparse embedded deformation graph of surfels, using a hierarchical deep reinforcement (HDR) network. The latter comprises two parts: a global HDR-Net which rapidly detects local regions with large geometric errors, and a local HDR-Net serving as a local patch refinement operator to promptly complete and enhance such regions. Training the global HDR-Net is formulated as a novel reinforcement learning problem to implicitly learn the region selection strategy with the goal of improving the overall reconstruction quality. The applicability and efficiency of our approach are demonstrated using a large-scale dynamic reconstruction dataset. Our method can reconstruct geometry with higher quality than traditional methods.
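The region-selection step described in the abstract lends itself to a policy-gradient formulation. Below is a hypothetical PyTorch sketch, not the authors' implementation: GlobalPolicyNet, select_regions, and policy_update are illustrative names, the region features are random, and the reward is simply the measured drop in geometric error after the selected regions have been refined.

```python
import torch
import torch.nn as nn

class GlobalPolicyNet(nn.Module):
    """Stand-in for a global region-scoring network (illustrative only)."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # (num_regions, feat_dim) -> (num_regions,) selection logits
        return self.mlp(region_feats).squeeze(-1)

def select_regions(policy, region_feats, k=4):
    """Sample k regions to hand to a local refinement operator."""
    probs = torch.softmax(policy(region_feats), dim=0)
    chosen = torch.multinomial(probs, num_samples=k)
    return chosen, torch.log(probs[chosen]).sum()

def policy_update(optimizer, log_prob, error_before, error_after):
    """REINFORCE-style update: reward is the measured drop in geometric error."""
    reward = float(error_before - error_after)
    loss = -log_prob * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy usage with random features and a simulated error reduction:
policy = GlobalPolicyNet()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
feats = torch.randn(32, 64)                      # 32 candidate regions
chosen, log_prob = select_regions(policy, feats)
policy_update(opt, log_prob, error_before=0.8, error_after=0.5)
```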

2019 ◽  
Vol 19 (1) ◽  
pp. 4-16 ◽  
Author(s):  
Qihui Wu ◽  
Hanzhong Ke ◽  
Dongli Li ◽  
Qi Wang ◽  
Jiansong Fang ◽  
...  

Over the past decades, peptides as therapeutic candidates have received increasing attention in drug discovery, especially antimicrobial peptides (AMPs), anticancer peptides (ACPs), and anti-inflammatory peptides (AIPs). Peptides are considered capable of regulating complex diseases that were previously intractable. In recent years, the critical problem of antimicrobial resistance has driven the pharmaceutical industry to look for new therapeutic agents. Compared to small organic drugs, peptide-based therapy exhibits high specificity and minimal toxicity, so peptides are widely employed in the design and discovery of new potent drugs. However, large-scale screening of peptide activity with traditional approaches is costly, time-consuming, and labor-intensive. Hence, in silico methods, mainly machine learning approaches, have been introduced to predict peptide activity because of their accuracy and effectiveness. In this review, we document recent progress in machine learning-based prediction of peptide activity, which will be of great benefit to the discovery of potentially active AMPs, ACPs, and AIPs.
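Most predictors of this kind follow the same recipe: encode a peptide sequence as a fixed-length numeric feature vector and train a supervised classifier on labelled examples. The sketch below is a minimal, generic illustration of that recipe, not a specific method from the review; the sequences, labels, and amino-acid-composition features are invented for demonstration.

```python
# Generic illustration: amino-acid-composition features + an off-the-shelf classifier.
from collections import Counter
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac_features(seq: str) -> np.ndarray:
    """Amino-acid composition: fraction of each residue in the peptide."""
    counts = Counter(seq.upper())
    return np.array([counts.get(a, 0) / len(seq) for a in AMINO_ACIDS])

# Toy data: sequences labelled 1 (active) or 0 (inactive); real studies rely on
# curated peptide databases with experimentally verified activities.
peptides = ["GLFDIVKKVVGALGSL", "ACDEFGHIK", "KKLLKKLLKKLL", "GGGGSGGGGS"]
labels   = [1, 0, 1, 0]

X = np.vstack([aac_features(p) for p in peptides])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(aac_features("KWKLFKKIGAVLKVL").reshape(1, -1)))
```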


Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 41
Author(s):  
Tim Jurisch ◽  
Stefan Cantré ◽  
Fokke Saathoff

A variety of recent studies have demonstrated the applicability of different dried, fine-grained dredged materials as replacement materials for erosion-resistant sea dike covers. In Rostock, Germany, a large-scale field experiment was conducted in which different dredged materials were tested with regard to installation technology, stability, turf development, infiltration, and erosion resistance. The infiltration experiments, intended to study the development of a seepage line in the dike body, produced unexpected measurement results. Due to the high complexity of the problem, standard geo-hydraulic models proved unable to explain these results. Therefore, different methods of inverse infiltration modeling were applied, namely the parameter estimation tool (PEST) and the AMALGAM algorithm. In the paper, the two approaches are compared and discussed. A sensitivity analysis confirmed the presumption of non-linear model behavior for the infiltration problem, and the eigenvalue ratio indicates that the dike infiltration is an ill-posed problem. Although this complicates the inverse modeling (e.g., termination in local minima), parameter sets close to an optimum were found with both the PEST and AMALGAM algorithms. Together with the field measurement data, this information supports the assessment of the effective material properties of the dredged materials used as dike cover.
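Inverse infiltration modeling of this kind amounts to adjusting hydraulic parameters until a forward seepage model reproduces the field measurements. The sketch below is a generic illustration of that idea using SciPy's least-squares optimizer, not PEST or AMALGAM; forward_model and its two parameters are placeholders and the data are synthetic. The final lines show how the singular values of the Jacobian yield the eigenvalue ratio mentioned above as an indicator of ill-posedness.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, times):
    """Placeholder forward model: exponential rise of the seepage line."""
    k_sat, theta = params                      # hypothetical material parameters
    return theta * (1.0 - np.exp(-k_sat * times))

times = np.linspace(0.0, 48.0, 25)             # hours
measured = forward_model([0.12, 0.35], times) + np.random.normal(0, 0.005, times.size)

def residuals(params):
    # Misfit between simulated and "measured" seepage heads.
    return forward_model(params, times) - measured

fit = least_squares(residuals, x0=[0.05, 0.2], bounds=([1e-4, 0.0], [1.0, 1.0]))
print("estimated parameters:", fit.x)

# Singular values of the Jacobian at the solution: a very large ratio between
# the largest and smallest value signals a poorly constrained (ill-posed) fit.
sv = np.linalg.svd(fit.jac, compute_uv=False)
print("eigenvalue ratio:", sv[0] / sv[-1])
```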


2021 ◽  
Vol 13 (19) ◽  
pp. 3994
Author(s):  
Lu Xu ◽  
Hong Zhang ◽  
Chao Wang ◽  
Sisi Wei ◽  
Bo Zhang ◽  
...  

The elimination of hunger is the top concern for developing countries and is key to maintaining national stability and security. Paddy rice plays an essential role in the food supply, and its accurate monitoring is of great importance for sustainable development. As one of the world's most important paddy rice producing countries, Thailand has a hot and humid climate favorable for rice growing, but the growth patterns of paddy rice are too complicated to construct reliable growth models for rice discrimination. To solve this problem, this study proposes a large-scale paddy rice mapping scheme that uses time-series Sentinel-1 data to generate a convincing annual paddy rice map of Thailand. The proposed method extracts temporal statistical features from the time-series SAR images to overcome the intra-class variability due to different management practices, and modifies the U-Net model with a fully connected Conditional Random Field (CRF) to preserve field edges. In this study, 758 Sentinel-1 images covering the whole country from the end of 2018 through 2019 were acquired to generate the annual paddy rice map. The accuracy, precision, and recall of the resulting paddy rice map reached 91%, 87%, and 95%, respectively. Compared to an SVM classifier and a U-Net model based on a feature selection strategy (FS-U-Net), the proposed scheme achieved the best overall performance, demonstrating its ability to cope with complex cultivation conditions and to accurately identify the fragmented paddy rice fields of Thailand. This study provides a promising tool for large-scale paddy rice monitoring in tropical production regions and has great potential for the sustainable management of food and the environment globally.
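The temporal statistical features mentioned above can be illustrated with a short sketch: collapse a per-pixel Sentinel-1 backscatter time series into order-free statistics (mean, standard deviation, extremes, range) that are less sensitive to differing planting calendars. The exact feature set and preprocessing used in the paper may differ; the image stack below is synthetic.

```python
import numpy as np

def temporal_features(stack: np.ndarray) -> np.ndarray:
    """stack: (T, H, W) backscatter in dB -> (C, H, W) per-pixel statistics."""
    feats = [
        stack.mean(axis=0),
        stack.std(axis=0),
        stack.min(axis=0),
        stack.max(axis=0),
        stack.max(axis=0) - stack.min(axis=0),   # range captures flooding-to-heading dynamics
    ]
    return np.stack(feats, axis=0)

# Example with a synthetic 30-scene stack:
stack = np.random.normal(-14.0, 2.0, size=(30, 256, 256)).astype(np.float32)
features = temporal_features(stack)              # shape (5, 256, 256), fed to the segmentation model
print(features.shape)
```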


2021 ◽  
Author(s):  
◽  
Yu Ren

Spectrum today is regulated on the basis of fixed licences: in the past, radio operators were allocated a frequency band for exclusive use. This has become a problem for new users and for the modern explosion of wireless services which, having arrived late, find the remaining available spectrum scarce. Cognitive radio (CR) presents a solution. CRs combine intelligence, spectrum sensing, and software-reconfigurable radio capabilities. This allows them to transmit opportunistically across several licensed bands for seamless communications, switching to another channel without causing interference when a licensee is sensed in the original band. Enabling this is an intelligent dynamic channel selection strategy capable of finding the best-quality channel to transmit on, i.e., the one that suffers the least licensee interruption. This thesis evaluates a Q-learning channel selection scheme using an experimental approach. A cognitive radio deploying the scheme is implemented on GNU Radio and its performance is measured across channels with different utilizations in terms of packet transmission success rate, goodput, and interference caused. We derive similar analytical expressions for the general case of large-scale networks. Our results show that using the Q-learning scheme for channel selection significantly improves the goodput and packet transmission success rate of the system.
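A stripped-down simulation conveys the core of such a Q-learning channel selection scheme (this is not the thesis's GNU Radio implementation; the channel utilisations are invented and the update is a stateless, bandit-style simplification): the agent keeps one Q-value per channel, explores with probability epsilon, and is rewarded for transmitting on an idle channel and penalised for interfering with a licensee.

```python
import random

NUM_CHANNELS = 4
busy_prob = [0.8, 0.5, 0.2, 0.6]        # hypothetical licensee duty cycles per channel
q = [0.0] * NUM_CHANNELS
alpha, epsilon = 0.1, 0.1               # learning rate and exploration rate

for step in range(5000):
    if random.random() < epsilon:
        ch = random.randrange(NUM_CHANNELS)                  # explore a random channel
    else:
        ch = max(range(NUM_CHANNELS), key=q.__getitem__)     # exploit the best-valued channel
    licensee_active = random.random() < busy_prob[ch]
    reward = -1.0 if licensee_active else 1.0                # penalise interference, reward success
    q[ch] += alpha * (reward - q[ch])                        # stateless Q-value update

print("learned channel values:", [round(v, 2) for v in q])
# The agent converges on channel 2, the least utilised band in this toy setup.
```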


2015 ◽  
Vol 27 (9) ◽  
pp. 2305-2319 ◽  
Author(s):  
Guilherme Dal Bianco ◽  
Renata Galante ◽  
Marcos Andre Goncalves ◽  
Sergio Canuto ◽  
Carlos A. Heuser

Author(s):  
Silvio Barra ◽  
Maria De Marsico ◽  
Chiara Galdi

In this chapter, the authors present issues related to automatic face image tagging techniques. The main purpose of such techniques in user applications is to support the organization (indexing) and retrieval (or easy browsing) of images or videos in large collections. Their core modules include algorithms and strategies for handling very large face databases, mostly acquired in real conditions. As background for understanding how automatic face tagging works, an overview of face recognition techniques is given, covering both traditional approaches and novel techniques proposed for face recognition in uncontrolled settings. Moreover, some applications and the way they work are summarized in order to depict the state of the art in this area of face recognition research. Many of them are used to tag faces and to organize photo albums with respect to the person(s) appearing in the annotated photos. This kind of activity has recently expanded from personal devices to social networks and can also significantly support more demanding tasks, such as the automatic handling of large editorial collections for magazine publishing and archiving. Finally, a number of approaches to large-scale face datasets, as well as some automatic face image tagging techniques, are presented and compared. The authors show that many approaches, in both commercial and research applications, still provide only a semi-automatic solution to this problem.


2012 ◽  
pp. 232-259
Author(s):  
Eddy Caron ◽  
Frédéric Desprez ◽  
Franck Petit ◽  
Cédric Tedeschi

Within distributed computing platforms, some computing abilities (or services) are offered to clients. To build dynamic applications using such services as basic building blocks, a critical prerequisite is to discover those services. Traditional approaches to the service discovery problem have historically relied upon centralized solutions, which do not scale well on large, unreliable platforms. In this chapter, we first give an overview of the state of the art in service discovery solutions based on peer-to-peer (P2P) technologies, which allow this functionality to remain efficient at large scale. We then focus on one of these approaches: the Distributed Lexicographic Placement Table (DLPT) architecture, which provides dedicated mechanisms for load balancing and fault tolerance. This solution centers on three key points. First, it relies on an indexing system structured as a prefix tree, allowing multi-attribute range queries. Second, it allows such structures to be mapped onto heterogeneous and dynamic networks and proposes load balancing heuristics for them. Third, as the target platform is dynamic and unreliable, we describe its fault-tolerance mechanisms, based on self-stabilization. Finally, we present the software prototype of this architecture and early experiments with it.
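A toy, single-machine sketch of the prefix-tree indexing idea (far simpler than the DLPT itself, which distributes tree nodes over a P2P overlay and adds load balancing and self-stabilization): services are registered under string keys, and a prefix query walks the trie and collects every provider in the matching subtree.

```python
class TrieNode:
    def __init__(self):
        self.children = {}        # char -> TrieNode
        self.services = []        # providers registered at this exact key

class ServiceIndex:
    def __init__(self):
        self.root = TrieNode()

    def register(self, key: str, provider: str):
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.services.append(provider)

    def prefix_query(self, prefix: str):
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # Collect every service in the subtree rooted at the prefix.
        out, stack = [], [node]
        while stack:
            n = stack.pop()
            out.extend(n.services)
            stack.extend(n.children.values())
        return out

index = ServiceIndex()
index.register("dgemm/x86", "node-17")
index.register("dgemm/gpu", "node-42")
print(index.prefix_query("dgemm"))   # both providers match the prefix
```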


Author(s):  
Xianglan Bai ◽  
Guang-Xin Huang ◽  
Xiao-Jun Lei ◽  
Lothar Reichel ◽  
Feng Yin
Keyword(s):  

Geophysics ◽  
1988 ◽  
Vol 53 (3) ◽  
pp. 375-385 ◽  
Author(s):  
R. R. B. von Frese ◽  
D. N. Ravat ◽  
W. J. Hinze ◽  
C. A. McGue

Instabilities and the large matrices which are common to inversions of regional magnetic and gravity anomalies often complicate the use of efficient least‐squares matrix procedures. Inversion stability profoundly affects anomaly analysis, and hence it must be considered in any application. Wildly varying or unstable solutions are the products of errors in the anomaly observations and the integrated effects of observation spacing, source spacing, elevation differences between sources and observations, geographic coordinate attributes, geomagnetic field attitudes, and other factors which influence the conditioning of inversion. Solution instabilities caused by ill‐posed parameters can be efficiently minimized by ridge regression with a damping factor large enough to stabilize the inversion, but small enough to produce an analytically useful solution. An effective choice for the damping factor is facilitated by plotting damping factors against residuals between observed and modeled anomalies and by then comparing this curve to curves of damping factors plotted against solution variance or the residuals between predicted anomaly maps representing the processing objective (e.g., downward continuation, differential reduction to the radial pole, etc.). To obtain accurate and efficient large‐scale inversions of anomaly data, a procedure based on the superposition principle of potential fields may be used. This method involves successive inversions of residuals between the observations and various stable model fields which can be readily accommodated by available computer memory. Integration of the model fields yields a well‐resolved representation of the observed anomalies corresponding to an integrated model which normally could not be obtained by direct inversion because the memory requirements would be excessive. MAGSAT magnetic anomaly inversions over India demonstrate the utility of these procedures for improving the geologic analysis of potential field anomalies.
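The damping-factor selection described above can be illustrated with a small ridge-regression scan (a generic sketch, not the authors' MAGSAT workflow; the matrix and data are synthetic): solve (AᵀA + k²I)x = Aᵀb for a range of damping factors k and tabulate the data residual against the solution norm, then pick the smallest k that stabilizes the solution without inflating the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
A[:, 1] = A[:, 0] + 1e-4 * rng.normal(size=200)   # near-collinear columns -> ill-conditioning
x_true = rng.normal(size=50)
b = A @ x_true + 0.05 * rng.normal(size=200)

for k in [0.0, 0.01, 0.1, 1.0, 10.0]:
    lhs = A.T @ A + (k ** 2) * np.eye(A.shape[1])  # ridge-damped normal equations
    x = np.linalg.solve(lhs, A.T @ b)
    residual = np.linalg.norm(A @ x - b)
    print(f"k={k:<5}  residual={residual:.3f}  |x|={np.linalg.norm(x):.3f}")
# The chosen k is the smallest damping that tames |x| without inflating the residual.
```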

