Hierarchical semantic interaction-based deep hashing network for cross-modal retrieval

2021 · Vol 7 · pp. e552
Author(s): Shubai Chen, Song Wu, Li Wang

Due to the high efficiency of hashing technology and the high abstraction capability of deep networks, deep hashing has achieved appealing effectiveness and efficiency for large-scale cross-modal retrieval. However, how to efficiently measure the similarity of fine-grained multi-labels for multi-modal data and how to thoroughly exploit the layer-specific information of intermediate network layers remain two challenges for high-performance cross-modal hashing retrieval. Thus, in this paper, we propose a novel Hierarchical Semantic Interaction-based Deep Hashing Network (HSIDHN) for large-scale cross-modal retrieval. In the proposed HSIDHN, multi-scale and fusion operations are first applied to each layer of the network. A Bidirectional Bi-linear Interaction (BBI) policy is then designed to achieve hierarchical semantic interaction among different layers, so that the capability of the hash representations is enhanced. Moreover, a dual-similarity measurement ("hard" similarity and "soft" similarity) is designed to calculate the semantic similarity of data from different modalities, aiming to better preserve the semantic correlation of multi-labels. Extensive experimental results on two large-scale public datasets show that the performance of our HSIDHN is competitive with state-of-the-art deep cross-modal hashing methods.
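The dual-similarity idea can be pictured on multi-label annotation vectors: "hard" similarity marks whether two items share any label at all, while "soft" similarity grades how strongly their label sets overlap. The sketch below is a minimal illustration of that distinction, assuming a cosine overlap for the soft term; it is not the authors' implementation.

```python
import numpy as np

def dual_similarity(labels_a, labels_b):
    """Illustrative hard/soft similarity between two sets of multi-label vectors.

    labels_a: (n, c) binary label matrix for modality A (e.g. images)
    labels_b: (m, c) binary label matrix for modality B (e.g. texts)
    Returns (hard, soft) similarity matrices of shape (n, m).
    """
    labels_a = np.asarray(labels_a, dtype=float)
    labels_b = np.asarray(labels_b, dtype=float)

    overlap = labels_a @ labels_b.T              # number of shared labels
    hard = (overlap > 0).astype(float)           # "hard": any label in common

    # "soft": cosine overlap of the label vectors (an assumption, for illustration)
    norms = np.linalg.norm(labels_a, axis=1, keepdims=True) \
          * np.linalg.norm(labels_b, axis=1, keepdims=True).T
    soft = np.divide(overlap, norms, out=np.zeros_like(overlap), where=norms > 0)
    return hard, soft

# Toy usage: two images vs. two texts annotated with 4 possible labels
img = [[1, 0, 1, 0], [0, 1, 0, 0]]
txt = [[1, 0, 0, 0], [0, 0, 0, 1]]
hard, soft = dual_similarity(img, txt)
print(hard)  # [[1. 0.], [0. 0.]]
print(soft)
```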

2020 · Vol 1 (2) · pp. 101-123
Author(s): Hiroaki Shiokawa, Yasunori Futamura

This paper addresses the problem of finding clusters in graph-structured data such as Web graphs and social networks. Graph clustering is one of the fundamental techniques for understanding the structures present in such complex graphs. In the Web and data mining communities, modularity-based graph clustering algorithms are successfully used in many applications. However, it is difficult for modularity-based methods to find fine-grained clusters hidden in large-scale graphs; the methods fail to reproduce the ground truth. In this paper, we present a novel modularity-based algorithm, CAV, that shows better clustering results than the traditional algorithms. The proposed algorithm incorporates cohesiveness-aware vector partitioning into graph spectral analysis to improve clustering accuracy. Additionally, this paper presents a novel efficient algorithm, P-CAV, for further improving the clustering speed of CAV; P-CAV is an extension of CAV that utilizes thread-based parallelization on a many-core CPU. Our extensive experiments on synthetic and public datasets demonstrate the performance superiority of our approaches over state-of-the-art approaches.
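For reference, the modularity score that such methods optimize compares the fraction of intra-cluster edges with the fraction expected under a random, degree-preserving model. The snippet below is a generic illustration of computing modularity for a given partition; it is not the CAV or P-CAV algorithm itself.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q of an undirected graph under a given partition.

    edges: iterable of (u, v) pairs (undirected, no self-loops assumed)
    community: dict mapping node -> community id
    """
    m = len(edges)                     # number of edges
    degree = defaultdict(int)
    intra = defaultdict(int)           # edges inside each community
    deg_sum = defaultdict(int)         # total degree of each community

    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    for node, d in degree.items():
        deg_sum[community[node]] += d

    # Q = sum_c [ e_c / m - (d_c / 2m)^2 ]
    return sum(intra[c] / m - (deg_sum[c] / (2 * m)) ** 2 for c in deg_sum)

# Toy usage: two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(round(modularity(edges, community), 3))  # roughly 0.357
```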


2006 · Vol 45 · pp. 885-892
Author(s): Hitoshi Sumiya

High-purity, single-phase polycrystalline diamond and cBN have been successfully synthesized by direct conversion sintering from graphite and hBN, respectively, under static high pressure and high temperature. The high-purity polycrystalline diamond synthesized directly from graphite at ≥15 GPa and 2300-2500 °C has a mixed texture of a homogeneous fine structure (grain size: 10-30 nm, formed in a diffusion process) and a lamellar structure (formed in a martensitic process). The polycrystalline diamond has very high hardness, equivalent to or even higher than that of single-crystal diamond. The high-purity polycrystalline cBN synthesized from high-purity hBN at 7.7 GPa and 2300 °C consists of homogeneous fine grains (<0.5 μm, formed in a diffusion process). The hardness of the fine-grained high-purity polycrystalline cBN is clearly higher than that of single-crystal cBN. The fine microstructures free of any secondary phase and the extremely high hardness of the nano-polycrystalline diamond and the fine-grained polycrystalline cBN are promising for next-generation high-precision and high-efficiency cutting tools.


2015 · Vol 4 (4)
Author(s): Baohua Jia

Light management plays an important role in high-performance solar cells. Nanostructures that can effectively trap light offer great potential for improving the conversion efficiency of solar cells with much reduced material usage. Developing low-cost, large-scale nanostructures that can be integrated with solar cells thus promises new solutions for high-efficiency, low-cost solar energy harvesting. In this paper, we review the exciting progress in this field, in particular in market-dominating silicon solar cells, and point out challenges and future trends.


Crystals · 2021 · Vol 11 (3) · pp. 295
Author(s): Tianzhao Dai, Qiaojun Cao, Lifeng Yang, Mahmoud Aldamasy, Meng Li, ...

Perovskite solar cells (PSCs) have received a great deal of attention in science and technology due to their outstanding power conversion efficiency (PCE), which increased rapidly from 3.9% to 25.5% in less than a decade and is now comparable to that of single-crystal silicon solar cells. In the past ten years, much progress has been made: impressive ideas and advanced technologies have been proposed to improve PSC efficiency and stability. However, this outstanding progress has mostly been reported for small-area (<0.1 cm2) PSCs, and little attention has been paid to the preparation processes, and their underlying micro-mechanisms, for large-area (>1 cm2) PSCs. Meanwhile, scaling up is unavoidable for the large-scale application of PSCs. Therefore, we first summarize the current achievements in high-efficiency, high-stability large-area perovskite solar cells, including precursor composition, deposition, growth control, interface engineering, packaging technology, etc. We then provide a brief discussion and outlook on the future development of large-area PSCs toward commercialization.


Author(s): Xiaoxiao Sun, Liyi Chen, Jufeng Yang

Fine-grained classification focuses on recognizing the subordinate categories within a field, which requires a large number of labeled images, yet labeling such images is expensive. Utilizing web data has become an attractive option to meet the demand for training data for convolutional neural networks (CNNs), especially when well-labeled data are not sufficient. However, directly training on such easily obtained images often leads to unsatisfactory performance due to factors such as noisy labels. This has conventionally been addressed by reducing the noise level of the web data. In this paper, we take a fundamentally different view and propose an adversarial discriminative loss that advocates representation coherence between standard and web data. This is further encapsulated in a simple, scalable, and end-to-end trainable multi-task learning framework. We experiment on three public datasets using large-scale web data to evaluate the effectiveness and generalizability of the proposed approach. Extensive experiments demonstrate that our approach performs favorably against state-of-the-art methods.
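One common way to encourage representation coherence between two data sources is to train a domain discriminator adversarially against the feature extractor. The PyTorch-style sketch below illustrates that generic idea using a gradient-reversal formulation; the class and function names are invented for this example and it is not the authors' exact loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed (negated) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature comes from standard (0) or web (1) data."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feats, lam=1.0):
        # The discriminator learns to separate the domains, while the backbone
        # receives reversed gradients and learns to align the two representations.
        return self.head(GradReverse.apply(feats, lam)).squeeze(1)

def adversarial_coherence_loss(disc, std_feats, web_feats, lam=1.0):
    logits = torch.cat([disc(std_feats, lam), disc(web_feats, lam)])
    targets = torch.cat([torch.zeros(len(std_feats)), torch.ones(len(web_feats))])
    return F.binary_cross_entropy_with_logits(logits, targets)

# Toy usage with random features standing in for backbone outputs
disc = DomainDiscriminator(feat_dim=256)
std_feats, web_feats = torch.randn(8, 256), torch.randn(8, 256)
loss = adversarial_coherence_loss(disc, std_feats, web_feats)
loss.backward()
```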


Author(s): Jiaqi Ding, Zehua Zhang, Jijun Tang, Fei Guo

Changes in fundus blood vessels reflect the occurrence of eye diseases, and from them we can also detect other systemic diseases that cause fundus lesions, such as diabetes and hypertension complications. However, existing computational methods lack efficient and precise segmentation of vascular ends and thin retinal vessels. It is important to construct a reliable and quantitative automatic diagnostic method to improve diagnostic efficiency. In this study, we propose a multichannel deep neural network for retinal vessel segmentation. First, we apply U-Net to the original, thin, and thick vessels as a multi-objective optimization so that thick and thin vessels are trained purposively. Then, we design a specific fusion mechanism that combines the three prediction probability maps into a final binary segmentation map. Experiments show that our method effectively improves the segmentation performance on thin blood vessels and vascular ends. It outperforms many current vessel segmentation methods on three public datasets. In particular, we achieve the best F1-score of 0.8247 on the DRIVE dataset and 0.8239 on the STARE dataset. The findings of this study have potential for application in automated retinal image analysis and may provide a new, general, high-performance computing framework for image segmentation.
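The fusion step can be pictured as combining per-pixel probability maps from the three channels into one binary mask. The sketch below simply takes a weighted average of the maps and thresholds the result; this rule is an assumed stand-in for the paper's specific fusion mechanism.

```python
import numpy as np

def fuse_probability_maps(p_original, p_thin, p_thick, threshold=0.5, weights=(1.0, 1.0, 1.0)):
    """Fuse three per-pixel vessel probability maps into one binary segmentation.

    Each input is an (H, W) array of values in [0, 1]. A weighted average followed
    by thresholding is used here purely as an illustrative fusion rule.
    """
    maps = np.stack([p_original, p_thin, p_thick], axis=0).astype(float)
    w = np.asarray(weights, dtype=float).reshape(3, 1, 1)
    fused = (maps * w).sum(axis=0) / w.sum()
    return (fused >= threshold).astype(np.uint8)

# Toy usage on a 2x2 "image"
p_orig = np.array([[0.9, 0.2], [0.4, 0.8]])
p_thin = np.array([[0.7, 0.1], [0.6, 0.9]])
p_thick = np.array([[0.8, 0.3], [0.2, 0.7]])
print(fuse_probability_maps(p_orig, p_thin, p_thick))
# [[1 0]
#  [0 1]]
```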


2020 · Vol 34 (07) · pp. 11157-11164
Author(s): Sheng Jin, Shangchen Zhou, Yao Liu, Chao Chen, Xiaoshuai Sun, ...

Deep hashing methods have been proven to be effective and efficient for large-scale Web media search. The success of these data-driven methods largely depends on collecting sufficient labeled data, which is usually a crucial limitation in practical cases. Current solutions to this issue utilize Generative Adversarial Networks (GANs) to augment data in semi-supervised learning. However, existing GAN-based methods treat image generation and hashing learning as two isolated processes, leading to ineffective generation. Besides, most works fail to exploit the semantic information in unlabeled data. In this paper, we propose a novel Semi-supervised Self-paced Adversarial Hashing method, named SSAH, to solve the above problems in a unified framework. The SSAH method consists of an adversarial network (A-Net) and a hashing network (H-Net). To improve the quality of the generated images, the A-Net first learns hard samples with multi-scale occlusions and multi-angle rotated deformations, which compete against the learning of accurate hashing codes. Second, we design a novel self-paced hard-generation policy that gradually increases the hashing difficulty of the generated samples. To make use of the semantic information in unlabeled data, we propose a semi-supervised consistency loss. The experimental results show that our method significantly improves over state-of-the-art models on both widely used hashing datasets and fine-grained datasets.
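The semi-supervised consistency idea can be sketched as penalizing the distance between the hash codes an encoder produces for an unlabeled image and for a perturbed (e.g. occluded or rotated) version of it. The PyTorch-style snippet below is an illustrative formulation under that assumption, not the paper's exact loss; the network here is a tiny stand-in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashNet(nn.Module):
    """Tiny stand-in hashing network: features -> K-bit relaxed hash codes in (-1, 1)."""
    def __init__(self, feat_dim=512, n_bits=48):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_bits)

    def forward(self, x):
        return torch.tanh(self.fc(x))

def consistency_loss(h_net, feats_unlabeled, feats_perturbed):
    """Encourage an unlabeled sample and its perturbed variant to share the same code."""
    codes_a = h_net(feats_unlabeled)
    codes_b = h_net(feats_perturbed)
    return F.mse_loss(codes_a, codes_b)

# Toy usage: random features standing in for backbone outputs of the two views
h_net = HashNet()
feats = torch.randn(16, 512)
feats_aug = feats + 0.1 * torch.randn_like(feats)   # mimic a mild perturbation
loss = consistency_loss(h_net, feats, feats_aug)
loss.backward()
```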


2020 · Vol 245 · pp. 05042
Author(s): Miha Muškinja, Paolo Calafiura, Charles Leggett, Illya Shapoval, Vakho Tsulaia

The ATLAS experiment has successfully integrated High-Performance Computing (HPC) resources into its production system. Unlike the current generation of HPC systems and the LHC computing grid, the next generation of supercomputers is expected to be extremely heterogeneous in nature: different systems will have radically different architectures, and most of them will provide partitions optimized for different kinds of workloads. In this work we explore the applicability of concepts and tools realized in Ray (a high-performance distributed execution framework targeting large-scale machine-learning applications) to ATLAS event-throughput optimization on heterogeneous distributed resources, ranging from traditional grid clusters to exascale computers. We present a prototype of Raythena, a Ray-based implementation of the ATLAS Event Service (AES), a fine-grained event-processing workflow aimed at improving the efficiency of ATLAS workflows on opportunistic resources, specifically HPCs. The AES is implemented as an event-processing task farm that distributes packets of events to several worker processes running on multiple nodes. Each worker in the task farm runs an event-processing application (Athena) as a daemon. The whole system is orchestrated by Ray, which assigns work in a distributed, possibly heterogeneous, environment. For all its flexibility, the AES implementation currently comprises multiple separate layers that communicate through ad hoc command-line and file-based interfaces. The goal of Raythena is to integrate these layers through a feature-rich, efficient application framework. Besides increasing usability and robustness, a vertically integrated scheduler will enable us to explore advanced concepts such as dynamically shaping workflows to exploit the currently available resources, particularly on heterogeneous systems.
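As an illustration of the task-farm pattern on top of Ray, the following sketch deals hypothetical event ranges out to worker actors and gathers the results. The actor and method names are invented for this example and do not reflect the actual Raythena code; only the Ray primitives (ray.init, @ray.remote, .remote(), ray.get) are real.

```python
import ray

ray.init()  # connect to (or start) a Ray cluster

@ray.remote
class EventWorker:
    """Hypothetical worker actor standing in for a node-level Athena daemon."""
    def __init__(self, worker_id):
        self.worker_id = worker_id

    def process_event_range(self, event_range):
        # Placeholder for handing the range to the event-processing application.
        return {"worker": self.worker_id, "processed": list(event_range)}

# A small farm of workers; on a real cluster Ray would place them across nodes.
workers = [EventWorker.remote(i) for i in range(4)]

# Split a run of 100 events into packets and deal them out round-robin.
event_ranges = [range(start, start + 10) for start in range(0, 100, 10)]
futures = [workers[i % len(workers)].process_event_range.remote(r)
           for i, r in enumerate(event_ranges)]

results = ray.get(futures)          # gather results as packets complete
print(sum(len(r["processed"]) for r in results))  # 100
ray.shutdown()
```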


2014 · Vol 22 (2) · pp. 59-74
Author(s): Alex D. Breslow, Ananta Tiwari, Martin Schulz, Laura Carrington, Lingjia Tang, ...

Co-location, where multiple jobs share compute nodes in large-scale HPC systems, has been shown to increase aggregate throughput and energy efficiency by 10–20%. However, system operators disallow co-location due to fair-pricing concerns, i.e., the lack of a pricing mechanism that accounts for performance interference from co-running jobs. In the current pricing model, application execution time determines the price, which results in unfair prices paid by the minority of users whose jobs suffer from co-location. This paper presents POPPA, a runtime system that enables fair pricing by delivering precise online interference detection, thereby facilitating the adoption of co-location on supercomputers. POPPA leverages a novel shutter mechanism, a cyclic, fine-grained interference-sampling mechanism that accurately deduces the interference between co-runners, to provide unbiased pricing of jobs that share nodes. POPPA is able to quantify inter-application interference within a 4% mean absolute error on a variety of co-located benchmarks and real scientific workloads.
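The shutter idea can be pictured as periodically pausing a job's co-runners for a brief window and comparing the job's progress rate with and without contention, yielding a slowdown estimate that can discount the time-based price. The snippet below is a simplified, hypothetical model of that calculation, not the POPPA implementation.

```python
def estimate_interference(shutter_samples):
    """Estimate a job's slowdown from cyclic shutter samples.

    shutter_samples: list of (solo_rate, corun_rate) pairs, where each pair holds
    the job's measured progress rate while its co-runners were paused (solo) and
    while they were running (co-run). Returns the mean relative slowdown.
    """
    slowdowns = [(solo - corun) / solo for solo, corun in shutter_samples if solo > 0]
    return sum(slowdowns) / len(slowdowns) if slowdowns else 0.0

def fair_price(base_price, slowdown):
    """Discount the standard time-based price by the estimated interference."""
    return base_price * (1.0 - slowdown)

# Toy usage: three shutter windows where co-runners cost the job ~10% throughput
samples = [(100.0, 91.0), (98.0, 88.0), (102.0, 92.0)]
slowdown = estimate_interference(samples)
print(round(slowdown, 3))                    # ~0.097
print(round(fair_price(50.0, slowdown), 2))  # discounted price, ~45.17
```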

