CEMAB: A Cross-Entropy-based Method for Large-Scale Multi-Armed Bandits

Author(s):  
Erli Wang ◽  
Hanna Kurniawati ◽  
Dirk P. Kroese


2021 ◽
Vol 13 (23) ◽  
pp. 4786
Author(s):  
Zhen Wang ◽  
Nannan Wu ◽  
Xiaohan Yang ◽  
Bingqi Yan ◽  
Pingping Liu

As satellite observation technology rapidly develops, the number of remote sensing (RS) images increases dramatically, making RS image retrieval more challenging in terms of both speed and accuracy. Recently, an increasing number of researchers have turned their attention to this issue, in particular to hashing algorithms, which map real-valued data onto a low-dimensional Hamming space and have been widely used to answer large-scale RS image search tasks quickly. However, most existing hashing algorithms only emphasize preserving point-wise or pair-wise similarity, which may lead to inferior approximate nearest neighbor (ANN) search results. To address this problem, we propose a novel triplet ordinal cross entropy hashing (TOCEH) method. In TOCEH, to enhance the ability to preserve ranking orders across different spaces, we establish a tensor graph representing the Euclidean triplet ordinal relationships among RS images and minimize the cross entropy between the probability distribution of the established Euclidean similarity graph and that of the Hamming triplet ordinal relation given the binary codes. During training, to avoid the non-deterministic polynomial (NP) hard problem, we replace the discrete encoding process with a continuous function. Furthermore, we design a quantization objective function based on the principle of preserving the triplet ordinal relation, which minimizes the loss caused by the continuous relaxation. Comparative RS image retrieval experiments are conducted on three publicly available datasets: the UC Merced Land Use Dataset (UCMD), SAT-4, and SAT-6. The experimental results show that the proposed TOCEH algorithm outperforms many existing hashing algorithms in RS image retrieval tasks.
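A minimal NumPy sketch of one way such a triplet ordinal cross entropy objective could be written is given below. The sigmoid-based ordinal probabilities, the soft Hamming distance, the quantization weight lam, and all function names are illustrative assumptions; the abstract does not specify the exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def triplet_ordinal_ce(x_a, x_p, x_n, b_a, b_p, b_n, lam=0.1):
    """Illustrative triplet ordinal cross entropy loss.

    x_*: real-valued features; b_*: relaxed codes in (-1, 1)
    (e.g. tanh outputs standing in for discrete binary codes).
    """
    # Ordinal probability in Euclidean space: how likely the anchor
    # is closer to the positive than to the negative sample.
    d_ap = np.linalg.norm(x_a - x_p)
    d_an = np.linalg.norm(x_a - x_n)
    p = sigmoid(d_an - d_ap)

    # The same ordinal probability computed from (relaxed) Hamming distances.
    k = b_a.size
    h_ap = 0.5 * (k - b_a @ b_p)   # soft Hamming distance
    h_an = 0.5 * (k - b_a @ b_n)
    q = sigmoid(h_an - h_ap)

    # Cross entropy between the two ordinal distributions ...
    ce = -(p * np.log(q + 1e-12) + (1 - p) * np.log(1 - q + 1e-12))
    # ... plus a quantization penalty pushing relaxed codes toward +/-1.
    quant = sum(np.mean((np.abs(b) - 1.0) ** 2) for b in (b_a, b_p, b_n))
    return ce + lam * quant
```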


Entropy ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. 449 ◽  
Author(s):  
Xian-Qin Ma ◽  
Chong-Chong Yu ◽  
Xiu-Xin Chen ◽  
Lan Zhou

Person re-identification in the image processing domain has been a challenging research topic due to the influence of pedestrian posture, background, lighting, and other factors. In this paper, hash learning is applied to person re-identification, and we propose a person re-identification method based on deep hash learning. Improving on the conventional approach, the proposed method uses an easy-to-optimize shallow convolutional neural network to learn the inherent implicit relationships in the image and then extracts the image's deep features. A hash layer with a three-step calculation is then incorporated into the fully connected layer of the network. The hash function is learned and mapped into a hash code through the connections between the network layers. The generated hash code minimizes the sum of the quantization loss and the Softmax regression cross-entropy loss, achieving end-to-end generation of hash codes within the network. After obtaining the hash code through the network, the distance between the hash code of the pedestrian image to be retrieved and the hash codes in the pedestrian image library is calculated to perform person re-identification. Experiments conducted on multiple standard datasets show that our deep hashing network achieves competitive performance and outperforms other hashing methods by large margins on Rank-1 and mAP identification rates in pedestrian re-identification. Moreover, our method is more efficient in training and retrieval than other pedestrian re-identification algorithms.
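As an illustration, the sketch below shows one way a hash layer and the combined objective (Softmax cross entropy plus quantization loss) could be attached to a CNN backbone in PyTorch. The layer sizes, tanh relaxation, identity count, and weight lam are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashHead(nn.Module):
    """Hypothetical hash layer appended to a CNN backbone: a fully
    connected projection followed by tanh as a continuous surrogate
    for binary codes, plus a classifier over pedestrian identities."""
    def __init__(self, feat_dim=512, code_bits=128, num_ids=751):
        super().__init__()
        self.project = nn.Linear(feat_dim, code_bits)
        self.classify = nn.Linear(code_bits, num_ids)

    def forward(self, features):
        code = torch.tanh(self.project(features))   # relaxed hash code in (-1, 1)
        logits = self.classify(code)
        return code, logits

def hashing_loss(code, logits, labels, lam=0.01):
    # Softmax cross entropy on identity labels ...
    ce = F.cross_entropy(logits, labels)
    # ... plus a quantization loss pulling relaxed codes toward +/-1.
    quant = torch.mean((code.abs() - 1.0) ** 2)
    return ce + lam * quant
```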


Author(s):  
Lujun Zhao ◽  
Qi Zhang ◽  
Peng Wang ◽  
Xiaoyu Liu

Most existing Chinese word segmentation (CWS) methods are supervised and hence require large-scale annotated, domain-specific datasets for training. In this paper, we seek to address the problem of CWS for resource-poor domains that lack annotated data. A novel neural network model is proposed to incorporate unlabeled and partially labeled data. To make use of unlabeled data, we combine a bidirectional LSTM segmentation model with two character-level language models using a gate mechanism; these language models can capture co-occurrence information. To make use of partially labeled data, we modify the original cross-entropy loss function of the RNN. Experimental results demonstrate that the method performs well on CWS tasks across a series of domains.
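One common way to adapt cross entropy to partially labeled sequences is to marginalize over the set of tags consistent with the partial annotation; the PyTorch sketch below illustrates this idea. The marginalization, the tensor shapes, and the function name are assumptions, and the paper's exact modification may differ.

```python
import torch
import torch.nn.functional as F

def partial_label_cross_entropy(logits, allowed_mask):
    """Hedged sketch: cross entropy generalized to partially labeled
    characters.  `logits` has shape (seq_len, num_tags); `allowed_mask`
    is a 0/1 tensor of the same shape marking the tags consistent with
    the partial annotation (a fully labeled character has exactly one
    allowed tag; an unconstrained character allows all tags)."""
    log_probs = F.log_softmax(logits, dim=-1)
    # Log of the probability mass assigned to the allowed tag set.
    allowed_log_prob = torch.logsumexp(
        log_probs.masked_fill(allowed_mask == 0, float('-inf')), dim=-1)
    return -allowed_log_prob.mean()
```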


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0255939
Author(s):  
Sibaji Gaj ◽  
Daniel Ontaneda ◽  
Kunio Nakamura

Gadolinium-enhancing lesions reflect active disease and are critical for in-patient monitoring in multiple sclerosis (MS). In this work, we have developed the first fully automated method to segment and count gadolinium-enhancing lesions from routine clinical MRI of MS patients. The proposed method first segments potential lesions using a 2D-UNet from multi-channel scans (T1 post-contrast, T1 pre-contrast, FLAIR, T2, and proton density) and then classifies the lesions using a random forest classifier. The algorithm was trained and validated on 600 MRIs with manual segmentation. We compared the effect of loss functions (Dice, cross entropy, and bootstrapping cross entropy) and of the number of input contrasts, and we compared the lesion counts with those made by radiologists on 2846 images. With the full set of five contrasts, the Dice score, lesion-wise sensitivity, and false discovery rate were 0.698, 0.844, and 0.307, improving to 0.767, 0.969, and 0.00 for large lesions (>100 voxels). The model trained with the bootstrapping loss function provided statistically significant increases of 7.1% in sensitivity and 2.3% in Dice compared with the model trained with cross entropy loss. T1 post/pre-contrast and FLAIR were the most important contrasts. For large lesions, the 2D-UNet model trained using T1 pre-contrast, FLAIR, T2, and PD had a lesion-wise sensitivity of 0.688 and a false discovery rate of 0.083, even without T1 post-contrast. For counting lesions in the 2846 routine MRI images, the model combining the 2D-UNet and random forest, trained with bootstrapping cross entropy, achieved an accuracy of 87.7% using T1 pre-contrast, T1 post-contrast, and FLAIR when lesion counts were categorized as 0, 1, and 2 or more. The model performs well on routine non-standardized MRI datasets, enables large-scale analysis of clinical datasets, and may have clinical applications.
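The abstract compares Dice, cross entropy, and bootstrapping cross entropy losses. Below is a minimal PyTorch sketch of a bootstrapped (hard-pixel) cross entropy, in which only the hardest fraction of pixels contributes to the loss; the fraction, tensor shapes, and function name are illustrative assumptions rather than the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def bootstrapped_cross_entropy(logits, targets, top_k_fraction=0.25):
    """Hedged sketch of a bootstrapping (hard-pixel) cross entropy:
    per-pixel cross entropy is computed, and only the hardest fraction
    of pixels per image contributes to the loss, focusing training on
    ambiguous voxels such as small enhancing lesions."""
    # logits: (N, C, H, W); targets: (N, H, W) integer class labels.
    per_pixel = F.cross_entropy(logits, targets, reduction='none')  # (N, H, W)
    flat = per_pixel.view(per_pixel.size(0), -1)
    k = max(1, int(top_k_fraction * flat.size(1)))
    hardest, _ = torch.topk(flat, k, dim=1)   # keep the k largest losses
    return hardest.mean()
```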


2021 ◽  
Vol 38 (10) ◽  
pp. 100301
Author(s):  
Yangsen Ye ◽  
Sirui Cao ◽  
Yulin Wu ◽  
Xiawei Chen ◽  
Qingling Zhu ◽  
...  

High-fidelity two-qubit gates are essential for the realization of large-scale quantum computation and simulation. Tunable coupler designs reduce the problems of parasitic coupling and frequency crowding in many-qubit systems and are thus considered advantageous. Here we design an extensible five-qubit system in which the central transmon qubit couples to each of its four nearest-neighboring qubits via a capacitive tunable coupler, and we experimentally demonstrate a high-fidelity controlled-phase (CZ) gate by manipulating the central qubit and one nearest-neighboring qubit. Speckle purity benchmarking and cross entropy benchmarking are used to assess the purity fidelity and the fidelity of the CZ gate. The average purity fidelity of the CZ gate is 99.69±0.04% and the average fidelity is 99.65±0.04%, implying that the control error is about 0.04%. Our work helps resolve many challenges in the implementation of large-scale quantum systems.
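For reference, the sketch below shows the standard linear cross-entropy benchmarking (XEB) fidelity estimator, D·⟨P_ideal(measured bitstring)⟩ − 1, commonly used to benchmark two-qubit gates with random circuits. The data layout and variable names are assumptions, and the paper may use a particular XEB variant.

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, sampled_bitstrings, num_qubits=2):
    """Linear XEB estimator: D * <P_ideal(measured bitstring)> - 1.

    ideal_probs: array of length 2**num_qubits with the simulated ideal
    output probabilities of one random circuit.
    sampled_bitstrings: iterable of measured outcomes, encoded as ints.
    """
    D = 2 ** num_qubits
    mean_p = np.mean([ideal_probs[b] for b in sampled_bitstrings])
    return D * mean_p - 1.0

# Hypothetical usage: a fully depolarized (uniform) output yields a
# fidelity near 0, while a noiseless sampler yields a fidelity near 1.
```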


1999 ◽  
Vol 173 ◽  
pp. 243-248
Author(s):  
D. Kubáček ◽  
A. Galád ◽  
A. Pravda

Abstract: The unusual short-period comet 29P/Schwassmann-Wachmann 1 has inspired many observers to try to explain its unpredictable outbursts. In this paper, large-scale structures and features in the inner part of the coma during time periods around outbursts are studied. CCD images were taken at Whipple Observatory, Mt. Hopkins, in 1989 and at the Astronomical Observatory, Modra, from 1995 to 1998. Photographic plates of the comet were taken at Harvard College Observatory, Oak Ridge, from 1974 to 1982. The latter were first digitized so that the same image-processing techniques could be applied to optimize the visibility of features in the coma during outbursts. The outbursts and coma structures show various shapes.


1994 ◽  
Vol 144 ◽  
pp. 29-33
Author(s):  
P. Ambrož

Abstract: The large-scale coronal structures observed during sporadically visible solar eclipses were compared with numerically extrapolated field-line structures of the coronal magnetic field. A characteristic relationship between the observed structures of the coronal plasma and the magnetic field line configurations was determined. The long-term evolution of large-scale coronal structures, inferred from photospheric magnetic observations over the course of the 11- and 22-year solar cycles, is described. Some known parameters, such as the source surface radius and the coronal rotation rate, are discussed and interpreted. A relation between the evolution of the large-scale photospheric magnetic field and the rearrangement of coronal structure is demonstrated.


2000 ◽  
Vol 179 ◽  
pp. 205-208
Author(s):  
Pavel Ambrož ◽  
Alfred Schroll

Abstract: Precise measurements of the heliographic positions of solar filaments were used to determine the proper motion of solar filaments on a time-scale of days. The filaments tend to exhibit a shaking or waving of their external structure and a general movement of the whole filament body, coinciding with the transport of magnetic flux in the photosphere. The velocity scatter of the individual measured points is about one order of magnitude higher than the accuracy of the measurements.


Author(s):  
Simon Thomas

Trends in the technology development of very large scale integrated circuits (VLSI) have been toward a higher density of components with smaller dimensions. Device dimensions have been scaled down not only laterally but also in depth. Such efforts in miniaturization bring with them new developments in materials and processing. Successful implementation of these efforts depends, to a large extent, on a proper understanding of the material properties, process technologies, and reliability issues, obtained through adequate analytical studies. Analytical instrumentation technology has, fortunately, kept pace with the basic requirements of devices with lateral dimensions in the micron/submicron range and depths on the order of nanometers. Often, newer analytical techniques have emerged, or more conventional techniques have been adapted, to meet these more stringent requirements. As such, a variety of analytical techniques are available today to aid an analyst in VLSI process evaluation. Generally, such analytical efforts are divided into the characterization of materials, the evaluation of processing steps, and the analysis of failures.

