EmbedMask: Embedding Coupling for Instance Segmentation

Author(s):  
Hui Ying ◽  
Zhaojin Huang ◽  
Shu Liu ◽  
Tianjia Shao ◽  
Kun Zhou

Current instance segmentation methods can be categorized into segmentation-based methods and proposal-based methods. The former performs segmentation first and then does clustering, while the latter detects objects first and then predicts the mask for each object proposal. In this work, we propose a single-stage method, named EmbedMask, that unifies both paradigms by combining their advantages, so it can achieve good instance segmentation performance and produce high-resolution masks at high speed. EmbedMask introduces two newly defined embeddings for mask prediction: pixel embedding and proposal embedding. During training, we enforce the pixel embedding to be close to its coupled proposal embedding if they belong to the same instance. During inference, pixels are assigned to the mask of a proposal if their embeddings are similar. This mechanism brings several benefits. First, the pixel-level clustering enables EmbedMask to generate high-resolution masks and avoids the complicated two-stage mask prediction. Second, the existence of the proposal embedding simplifies and strengthens the clustering procedure, so our method achieves higher speed and better performance than segmentation-based methods. Without bells and whistles, EmbedMask outperforms the state-of-the-art instance segmentation method Mask R-CNN on the challenging COCO dataset, obtaining more detailed masks at a higher speed.
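The inference-time coupling can be sketched as a simple distance test between per-pixel and per-proposal embeddings. The sketch below is a minimal illustration with made-up shapes; the Euclidean metric and the margin value are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def assign_masks(pixel_emb, proposal_embs, margin=1.0):
    """Assign pixels to proposal masks by embedding distance.

    pixel_emb:     (H, W, D) per-pixel embeddings
    proposal_embs: (K, D) one embedding per object proposal
    Returns a (K, H, W) boolean mask stack: a pixel joins a
    proposal's mask when the two embeddings are within `margin`.
    """
    H, W, D = pixel_emb.shape
    flat = pixel_emb.reshape(-1, D)                      # (H*W, D)
    # Pairwise Euclidean distances between proposals and pixels
    d = np.linalg.norm(proposal_embs[:, None, :] - flat[None, :, :], axis=-1)
    return (d < margin).reshape(-1, H, W)

# Toy example: two proposals with well-separated embeddings
pix = np.zeros((2, 2, 2))
pix[0, :, :] = [0.0, 0.0]   # top row of pixels near proposal 0
pix[1, :, :] = [5.0, 5.0]   # bottom row of pixels near proposal 1
props = np.array([[0.0, 0.0], [5.0, 5.0]])
masks = assign_masks(pix, props)
print(masks[0])  # top row True, bottom row False
```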

2015 ◽  
pp. 1933-1955
Author(s):  
Tolga Soyata ◽  
He Ba ◽  
Wendi Heinzelman ◽  
Minseok Kwon ◽  
Jiye Shi

With the recent advances in cloud computing and the capabilities of mobile devices, the state-of-the-art of mobile computing is at an inflection point, where compute-intensive applications can now run on today's mobile devices with limited computational capabilities. This is achieved by using the communications capabilities of mobile devices to establish high-speed connections to vast computational resources located in the cloud. While the execution scheme based on this mobile-cloud collaboration opens the door to many applications that can tolerate response times on the order of seconds and minutes, it proves to be an inadequate platform for running applications demanding real-time response within a fraction of a second. In this chapter, the authors describe the state-of-the-art in mobile-cloud computing as well as the challenges faced by traditional approaches in terms of their latency and energy efficiency. They also introduce the use of cloudlets as an approach for extending the utility of mobile-cloud computing by providing compute and storage resources accessible at the edge of the network, both for end processing of applications as well as for managing the distribution of applications to other distributed compute resources.


Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1274 ◽  
Author(s):  
Md. Atiqur Rahman ◽  
Mohamed Hamada

Modern daily life activities produce a huge amount of data, which creates a big challenge for storage and communication. For example, hospitals generate large volumes of data on a daily basis, which are difficult to store in limited storage space or to communicate over the restricted bandwidth of the Internet. Therefore, there is an increasing demand for more research in data compression and communication theory to deal with such challenges, responding to the requirements of high-speed data transmission over networks. In this paper, we focus on a deep analysis of the most common techniques in image compression. We present a detailed analysis of run-length, entropy, and dictionary-based lossless image compression algorithms, using a common numeric example for a clear comparison. Following that, the state-of-the-art techniques are discussed based on some benchmark images. Finally, we use standard metrics such as average code length (ACL), compression ratio (CR), peak signal-to-noise ratio (PSNR), efficiency, encoding time (ET) and decoding time (DT) in order to measure the performance of the state-of-the-art techniques.
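As a flavor of the run-length family analyzed here, the sketch below encodes a row of pixel values into (symbol, count) pairs and computes a simple compression ratio, assuming one byte per raw pixel and two bytes per pair (this cost model is an illustrative assumption):

```python
def rle_encode(data):
    """Run-length encode a sequence into (symbol, count) pairs."""
    runs = []
    for s in data:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([s, 1])       # start a new run
    return [(s, c) for s, c in runs]

def rle_decode(runs):
    """Expand (symbol, count) pairs back into the original sequence."""
    return [s for s, c in runs for _ in range(c)]

row = [255] * 6 + [0] * 3 + [255] * 4      # a 13-pixel binary image row
runs = rle_encode(row)
print(runs)                                 # [(255, 6), (0, 3), (255, 4)]

# Compression ratio: 13 raw bytes vs. 3 pairs of 2 bytes each
cr = len(row) / (2 * len(runs))
assert rle_decode(runs) == row              # lossless round trip
```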


2020 ◽  
Vol 34 (07) ◽  
pp. 12637-12644 ◽  
Author(s):  
Yibo Yang ◽  
Hongyang Li ◽  
Xia Li ◽  
Qijie Zhao ◽  
Jianlong Wu ◽  
...  

The panoptic segmentation task requires a unified result from semantic and instance segmentation outputs that may contain overlaps. However, current studies widely ignore modeling overlaps. In this study, we aim to model overlap relations among instances and resolve them for panoptic segmentation. Inspired by scene graph representation, we formulate the overlapping problem as a simplified case, named scene overlap graph. We leverage each object's category, geometry and appearance features to perform relational embedding, and output a relation matrix that encodes overlap relations. In order to overcome the lack of supervision, we introduce a differentiable module to resolve the overlap between any pair of instances. The mask logits after removing overlaps are fed into per-pixel instance id classification, which leverages the panoptic supervision to assist in the modeling of overlap relations. In addition, we generate an approximate ground truth of overlap relations as weak supervision, to quantify the accuracy of the overlap relations predicted by our method. Experiments on COCO and Cityscapes demonstrate that our method accurately predicts overlap relations and surpasses state-of-the-art performance for panoptic segmentation. Our method also won the Innovation Award in the COCO 2019 challenge.
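A minimal sketch of how a binary overlap-relation matrix can resolve conflicting mask logits before per-pixel instance-id classification. The hard suppression rule and the zero threshold for "claimed" pixels below are illustrative assumptions, not the paper's differentiable module:

```python
import numpy as np

def resolve_overlaps(mask_logits, relation):
    """Resolve pairwise mask overlaps with a binary relation matrix.

    mask_logits: (K, H, W) per-instance mask scores
    relation:    (K, K) binary matrix; relation[i, j] = 1 means
                 instance i lies on top of instance j.
    Where i occludes j, j's score is suppressed at pixels both
    instances claim; each pixel then takes the arg-max instance id.
    """
    out = mask_logits.copy()
    claimed = mask_logits > 0                  # pixels each instance claims
    K = out.shape[0]
    for i in range(K):
        for j in range(K):
            if relation[i, j]:
                overlap = claimed[i] & claimed[j]
                out[j][overlap] = -np.inf      # the occluded instance loses
    return out.argmax(axis=0)                  # per-pixel instance id

logits = np.array([[[1.0, 1.0], [-1.0, -1.0]],   # instance 0 claims top row
                   [[2.0, 2.0], [2.0, 2.0]]])    # instance 1 claims all
relation = np.array([[0, 1], [0, 0]])            # instance 0 occludes 1
ids = resolve_overlaps(logits, relation)
print(ids)   # top row -> instance 0, bottom row -> instance 1
```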


2015 ◽  
Vol 24 (07) ◽  
pp. 1550101 ◽  
Author(s):  
Raouf Senhadji-Navaro ◽  
Ignacio Garcia-Vargas

This work is focused on the problem of designing efficient reconfigurable multiplexer banks for RAM-based implementations of reconfigurable state machines. We propose a new architecture (called combination-based reconfigurable multiplexer bank, CRMUX) that uses simpler multiplexers than the state-of-the-art architecture (called variation-based reconfigurable multiplexer bank, VRMUX). The performance (in terms of speed, area, and reconfiguration cost) of both architectures is compared. Experimental results from MCNC finite state machine (FSM) benchmarks show that CRMUX is faster and more area-efficient than VRMUX. The reconfiguration cost of both multiplexer banks is studied using a behavioral model of a reconfigurable state machine. The results show that the reconfiguration cost of CRMUX is lower than that of VRMUX in most cases.


2020 ◽  
Vol 10 (16) ◽  
pp. 5427
Author(s):  
Philippe Lambin

Like in many countries, research devoted to nanosciences in Belgium grew up after high-resolution electron microscopy and local probe microscopic tools became available [...]


2011 ◽  
Vol 204-210 ◽  
pp. 1336-1341
Author(s):  
Zhi Gang Xu ◽  
Xiu Qin Su

Super-resolution (SR) restoration produces one or a set of high-resolution images from low-resolution observations, and involves many multidisciplinary studies. This paper gives a review of recent SR restoration approaches. First, we introduce the characteristics and framework of SR restoration, and survey the state of the art in SR restoration by taxonomy. Then we summarize and analyze the existing algorithms for registration and reconstruction, and compare the performance differences between these methods. After that, we discuss the SR problems of color images and compressed videos. Finally, we conclude with some thoughts about future directions.
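The classical shift-and-add idea behind many reconstruction algorithms surveyed here can be illustrated in one dimension: low-resolution frames taken at known sub-pixel shifts are interleaved onto a high-resolution grid. Integer shifts and exact registration are simplifying assumptions in this sketch:

```python
import numpy as np

def interleave_sr(frames, shifts, factor):
    """Shift-and-add super-resolution, 1-D sketch.

    frames: list of LR arrays, each a decimated view of the HR signal
    shifts: integer sub-pixel offset (in HR units) of each frame
    factor: decimation factor relating LR to HR sampling
    """
    n = len(frames[0]) * factor
    hr = np.zeros(n)
    count = np.zeros(n)
    for frame, s in zip(frames, shifts):
        hr[s::factor] += frame       # place samples on the HR grid
        count[s::factor] += 1
    count[count == 0] = 1            # leave unobserved samples at zero
    return hr / count                # average where frames coincide

hr_true = np.arange(8.0)
lr0 = hr_true[0::2]                  # frame sampled at even HR positions
lr1 = hr_true[1::2]                  # frame shifted by one HR sample
rec = interleave_sr([lr0, lr1], shifts=[0, 1], factor=2)
print(rec)   # the two shifted frames recover the 8-sample signal
```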


2020 ◽  
Author(s):  
Tahir Mahmood ◽  
Muhammad Owais ◽  
Kyoung Jun Noh ◽  
Hyo Sik Yoon ◽  
Adnan Haider ◽  
...  

BACKGROUND Accurate nuclei segmentation in histopathology images plays a key role in digital pathology. It is considered a prerequisite for the determination of cell phenotype, nuclear morphometrics, cell classification, and the grading and prognosis of cancer. However, it is a very challenging task because of the different types of nuclei, large intra-class variations, and diverse cell morphologies. Consequently, the manual inspection of such images under high-resolution microscopes is tedious and time-consuming. Alternatively, artificial intelligence (AI)-based automated techniques, which are fast, robust, and require less human effort, can be used. Recently, several AI-based nuclei segmentation techniques have been proposed. They have shown a significant performance improvement for this task, but there is room for further improvement. Thus, we propose an AI-based nuclei segmentation technique in which we adopt a new nuclei segmentation network empowered by residual skip connections to address this issue. OBJECTIVE The aim of this study was to develop an AI-based nuclei segmentation method for histopathology images of multiple organs. METHODS Our proposed residual-skip-connections-based nuclei segmentation network (R-NSN) comprises two main stages: stain normalization and nuclei segmentation, as shown in Figure 2. In the first stage, a histopathology image is stain normalized to balance the color and intensity variation. Subsequently, it is used as an input to the R-NSN in stage 2, which outputs a segmented image. RESULTS Experiments were performed on two publicly available datasets: 1) The Cancer Genomic Atlas (TCGA), and 2) Triple-negative Breast Cancer (TNBC).
The results show that our proposed technique achieves an aggregated Jaccard index (AJI) of 0.6794, Dice coefficient of 0.8084, and F1-measure of 0.8547 on the TCGA dataset, and an AJI of 0.7332, Dice coefficient of 0.8441, precision of 0.8352, recall of 0.8306, and F1-measure of 0.8329 on the TNBC dataset. These values are higher than those of the state-of-the-art methods. CONCLUSIONS The proposed R-NSN has the potential to maintain crucial features by using the residual connectivity from the encoder to the decoder and uses only a few layers, which reduces the computational cost of the model. The selection of a good stain normalization technique, the effective use of residual connections to avoid information loss, and the use of only a few layers to reduce the computational cost yielded outstanding results. Thus, our nuclei segmentation method is robust and is superior to the state-of-the-art methods. We expect that this study will contribute to the development of computational pathology software for research and clinical use and enhance the impact of computational pathology.
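The residual skip connection that the R-NSN relies on can be illustrated generically: the input feature map bypasses the transform layers and is added to their output, so fine detail survives the encoder-decoder path. This is a toy NumPy sketch of the general residual pattern, not the actual R-NSN architecture:

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 'convolution' as a per-pixel linear map (C_in -> C_out)."""
    return np.einsum('hwc,cd->hwd', x, w)

def residual_block(x, w1, w2):
    """Residual unit: two transforms plus an identity skip connection.

    The skip path carries the input feature map unchanged to the
    output, so spatial detail can bypass the transform layers.
    """
    h = np.maximum(conv1x1(x, w1), 0)     # transform + ReLU
    h = conv1x1(h, w2)
    return x + h                           # identity skip connection

x = np.ones((2, 2, 3))                     # toy (H, W, C) feature map
w1 = np.zeros((3, 3))                      # with zero weights the block
w2 = np.zeros((3, 3))                      # reduces to the identity
out = residual_block(x, w1, w2)
print(np.array_equal(out, x))   # True: the skip path preserves the input
```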


Author(s):  
TianJiao Xie ◽  
Bo Li ◽  
Mao Yang ◽  
Zhongjiang Yan

A multi-rate LDPC decoder architecture for DVB-S2 codes based on FPGA is proposed. Through elementary transformations on the parity-check matrices of DVB-S2 LDPC codes, a new matrix is obtained whose left part is a quasi-cyclic (QC) sub-matrix and whose right part is a transformation of the staircase lower triangular (TST) sub-matrix. The QC and TST parts are designed separately, so the successful experience of the most popular QC-LDPC decoder architectures can be drawn upon. For the TST sub-matrix, only the variable-node updating needs to be considered, and the check-node updating is realized compatibly with the QC sub-matrix. Based on the proposed architecture, a multi-rate LDPC decoder implemented on a Xilinx XC7VX485T FPGA achieves a maximum decoding throughput of 2.5 Gbit/s at 20 iterations with an operating frequency of 250 MHz, the highest throughput among state-of-the-art works.
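Iterative decoding against a parity-check matrix can be illustrated with a toy hard-decision bit-flipping decoder. Real DVB-S2 decoders use soft-decision message passing over much larger matrices; the small Hamming-code matrix below is a stand-in for illustration, not a DVB-S2 matrix:

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit-flipping decoding (toy sketch).

    H: (m, n) binary parity-check matrix
    r: length-n received hard-decision bits
    Each iteration flips the bits involved in the most unsatisfied
    parity checks until the syndrome H @ c = 0 (mod 2).
    """
    c = r.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c                      # all parity checks satisfied
        fails = H.T @ syndrome            # failed checks per bit
        c[fails == fails.max()] ^= 1      # flip the worst offenders
    return c

# (7,4) Hamming parity-check matrix as a small stand-in example
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)         # the all-zero codeword
received = codeword.copy()
received[2] ^= 1                          # inject a single bit error
decoded = bit_flip_decode(H, received)
print(decoded)   # the error is corrected back to all zeros
```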

