neighbourhood information
Recently Published Documents


TOTAL DOCUMENTS: 31 (five years: 11)

H-INDEX: 5 (five years: 1)

Author(s): Jaap Nieuwenhuis, Tom Kleinepier, Heleen Janssen, Maarten van Ham

Abstract: We studied the relation between cumulative exposure to neighbourhood deprivation and adolescents' Big Five personality traits, and the moderating role of personality in the relations between neighbourhood deprivation and the development of problem behaviour and educational attainment. We followed 5365 British adolescents from ages 10 to 16, with neighbourhood information available from birth onwards. Extraversion, agreeableness, emotional stability, and openness to experience moderated the relation between deprivation and problem behaviour; for educational attainment, only extraversion was a moderator. In other words, higher scores on these personality traits were associated with weaker relations between neighbourhood deprivation and both problem behaviour and educational attainment. The results show the importance of taking adolescents' personality into account when assessing developmental outcomes in relation to neighbourhood deprivation.


Algorithms, 2021, Vol 14 (8), pp. 228
Author(s): Ezekiel Mensah Martey, Hang Lei, Xiaoyu Li, Obed Appiah

Image representation plays a vital role in the realisation of a Content-Based Image Retrieval (CBIR) system. Representation is needed because pixel-by-pixel matching for image retrieval is impracticable as a result of the rigid nature of such an approach. In CBIR, therefore, colour, shape, texture and other visual features are used to represent images for effective retrieval. Among these visual features, colour and texture are particularly effective in defining the content of an image. However, combining these features does not necessarily guarantee better retrieval accuracy, owing to image transformations such as rotation, scaling, and translation that an image may have undergone. Moreover, feature vector representations that occupy ample memory space affect the running time of the retrieval task. To address these problems, we propose a new colour scheme called the Stack Colour Histogram (SCH), which inherently extracts colour and neighbourhood information into a descriptor for indexing images. SCH performs recurrent mean filtering of the image to be indexed. The recurrent blurring works by repeatedly filtering (transforming) the image: the output of one transformation serves as the input of the next, and in each case a histogram is generated. The histograms are summed bin-by-bin and the resulting vector is used to index the image. Because the blurring process uses each pixel's neighbourhood information, the proposed SCH captures the inherent textural information of the indexed image. SCH was extensively tested on the Coil100, Outext, Batik and Corel10K datasets. The Coil100, Outext, and Batik datasets are generally used to assess image texture descriptors, while Corel10K is used for heterogeneous descriptors.
The experimental results show that our proposed descriptor significantly improves retrieval and classification rates when compared with CMTH, MTH, TCM, CTM and NRFUCTM, the state-of-the-art descriptors for images with textural features.
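The SCH construction described above can be sketched in a few lines. A minimal sketch with NumPy only; the 3x3 mean filter, the number of blur iterations, the bin count and the use of a single grey channel (the paper indexes colour images) are all illustrative assumptions:

```python
import numpy as np

def mean_filter_3x3(img):
    """3x3 mean (blur) filter with edge padding, NumPy only."""
    p = np.pad(img, 1, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def stack_colour_histogram(image, iterations=4, bins=64):
    """Recurrently blur the image; histogram each pass; sum bin-by-bin."""
    img = image.astype(np.float64)
    descriptor = np.zeros(bins, dtype=np.float64)
    for _ in range(iterations):
        img = mean_filter_3x3(img)              # output feeds the next pass
        hist, _ = np.histogram(img, bins=bins, range=(0.0, 256.0))
        descriptor += hist                      # bin-by-bin accumulation
    return descriptor / descriptor.sum()        # normalised index vector
```

Because each pass blurs with neighbourhood means before histogramming, the accumulated bins encode how quickly local structure washes out, which is where the textural information enters the descriptor.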


2021
Author(s): Lizhi Zhang, Zhiquan Lai, Feng Liu, Zhejiang Ran

Graph neural networks (GNNs) have been emerging as powerful learning tools for recommendation systems, social networks and knowledge graphs. In these domains the scale of graph data is immense, so distributed graph learning is required for efficient GNN training. Graph partition-based methods are widely adopted to scale graph training. However, most previous works focus on scalability rather than accuracy and are not thoroughly evaluated on large-scale graphs. In this paper, we introduce ADGraph (accurate and distributed training on large graphs), exploring how to improve accuracy while preserving the scalability of large-scale graph training. Firstly, to maintain complete neighbourhood information for the training nodes after graph partitioning, we assign the l-hop neighbours of the training nodes to the same partition. We also analyse the accuracy and runtime performance of graph training under different l-hop settings. Secondly, multi-layer neighbourhood sampling is performed on each partition, so that the generated mini-batches can accurately train target nodes. We study the relationship between convergence accuracy and the number of sampled layers, and find that partial neighbourhood sampling can achieve better performance than full neighbourhood sampling. Thirdly, to further overcome the generalisation error caused by large-batch training, we reduce the batch size after graph partitioning and apply the linear scaling rule in distributed optimisation. We evaluate ADGraph using GraphSage and GAT models with the ogbn-products and Reddit datasets on 32 GPUs. Experimental results show that ADGraph achieves better performance than the benchmark accuracy of GraphSage and GAT, while obtaining a 24-29 times speedup on 32 GPUs.
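The first step above, keeping the l-hop neighbourhood of every training node inside its partition, amounts to a depth-limited breadth-first closure. A minimal sketch, assuming (our representation, not the paper's) that the graph is an adjacency dict and `train_nodes` is the set of training nodes assigned to one partition:

```python
from collections import deque

def l_hop_closure(graph, train_nodes, l):
    """Return train_nodes plus every node reachable within l hops of them.

    graph: dict mapping node -> iterable of neighbour nodes.
    The returned set is the node set to co-locate in one partition.
    """
    closure = set(train_nodes)
    frontier = deque((n, 0) for n in train_nodes)   # (node, hop depth)
    while frontier:
        node, depth = frontier.popleft()
        if depth == l:
            continue                                # don't expand past l hops
        for nb in graph.get(node, ()):
            if nb not in closure:
                closure.add(nb)
                frontier.append((nb, depth + 1))
    return closure
```

Larger l keeps mini-batch sampling entirely local to a partition (no cross-partition communication) at the cost of more node duplication across partitions, which is the accuracy/runtime trade-off the paper analyses.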


2021
Author(s): Yang Yang, Hongjian Sun, Yu Zhang, Tiefu Zhang, Jialei Gong, ...

Abstract: Transcriptome profiling and differential gene expression constitute a ubiquitous tool in biomedical research and clinical application. Linear dimensionality reduction methods, especially principal component analysis (PCA), are widely used to detect sample-to-sample heterogeneity in bulk transcriptomic datasets, so that appropriate analytic methods can be used to correct batch effects, remove outliers and distinguish subgroups. In response to the challenge of analysing transcriptomic datasets with large sample sizes, such as single-cell RNA-sequencing (scRNA-seq), non-linear dimensionality reduction methods were developed. t-distributed stochastic neighbour embedding (t-SNE) and uniform manifold approximation and projection (UMAP) have the advantage of preserving local information among samples, enabling effective identification of heterogeneity and efficient organisation of clusters in scRNA-seq analysis. However, the utility of t-SNE and UMAP in bulk transcriptomic analysis has not been carefully examined. We therefore compared major dimensionality reduction methods (linear: PCA; non-linear: multidimensional scaling (MDS), t-SNE, and UMAP) on 71 bulk transcriptomic datasets with large sample sizes. UMAP was found superior in preserving sample-level neighbourhood information and maintaining clustering accuracy, thus conspicuously differentiating batch effects, identifying pre-defined biological groups and revealing in-depth clustering structures. We further verified that new clustering structures visualised by UMAP were associated with biological features and clinical meaning. We therefore recommend the adoption of UMAP for visualising and analysing sizable bulk transcriptomic datasets.
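As a point of reference for the comparison above, the linear baseline (PCA) reduces to an SVD of the centred samples-by-genes matrix; the recommended non-linear method would be run on the same matrix (e.g. via the umap-learn package, an assumption not detailed in the abstract). A minimal NumPy sketch of the PCA sample embedding:

```python
import numpy as np

def pca_embed(X, n_components=2):
    """Project samples (rows of X) onto the top principal components."""
    Xc = X - X.mean(axis=0)                         # centre each gene/feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                 # sample-level coordinates
```

Because the projection is linear, distant samples dominate the layout; the abstract's point is that neighbour-preserving methods such as UMAP instead keep sample-level neighbourhood structure, which matters for spotting batch effects and subgroups.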


2020, Vol 13 (1), pp. 106
Author(s): Wenping Ma, Jiliang Zhao, Hao Zhu, Jianchao Shen, Licheng Jiao, ...

Recently, with the proliferation of space-borne Earth-observation satellites, the resolution of high-resolution panchromatic (PAN) and multispectral (MS) remote sensing images has been increasing year by year, and multiresolution remote sensing classification has become a research hotspot. In this paper, from the perspective of deep learning, we design a dual-branch interactive spatial-channel collaborative attention enhancement network (SCCA-net) for multiresolution classification. It combines sample enhancement and feature enhancement to improve classification accuracy. For sample enhancement, we propose an adaptive neighbourhood transfer sampling strategy (ANTSS). Unlike the traditional pixel-centric sampling strategy with an orthogonal sampling angle, our algorithm allows each patch to adaptively shift its neighbourhood range by finding the homogeneous region of the pixel to be classified, and adaptively adjusts the sampling angle according to the texture distribution of that homogeneous region, capturing neighbourhood information that is more conducive to classification. For feature enhancement, we design a local spatial attention module (LSA-module) for PAN data to exploit its spatial-resolution advantage and a global channel attention module (GCA-module) for MS data to improve its multi-channel representation. This not only highlights the spatial-resolution advantage of PAN data and the multi-channel advantage of MS data, but also increases the discriminability of the features through the interaction between the two modules. Quantitative and qualitative experimental results verify the robustness and effectiveness of the method.
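The intuition behind ANTSS, shifting the sampling window toward a more homogeneous neighbourhood rather than always centring it on the pixel, can be illustrated with a much-simplified sketch. The candidate offsets and the variance criterion here are assumptions made for illustration only; the paper's strategy also adapts the sampling angle, which is omitted:

```python
import numpy as np

def adaptive_patch(image, y, x, size=8):
    """Pick the size x size patch near (y, x) with the most homogeneous
    content (lowest variance) among a few shifted candidate centres."""
    half = size // 2
    step = max(half // 2, 1)
    best_patch, best_var = None, np.inf
    for dy in (-step, 0, step):
        for dx in (-step, 0, step):
            cy, cx = y + dy, x + dx
            top, left = cy - half, cx - half
            if (top < 0 or left < 0 or
                    top + size > image.shape[0] or left + size > image.shape[1]):
                continue                      # candidate falls outside the image
            patch = image[top:top + size, left:left + size]
            if patch.var() < best_var:
                best_var, best_patch = patch.var(), patch
    return best_patch
```

Since the centred window is itself a candidate, the chosen patch is never more heterogeneous than the traditional pixel-centric one.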


2020, Vol 12 (13), pp. 2141
Author(s): Ronghua Shang, Pei Peng, Fanhua Shang, Licheng Jiao, Yifei Shen, ...

In recent years, region-based algorithms have shown great potential in the field of synthetic aperture radar (SAR) image segmentation. However, SAR images contain a variety of landforms, and a landform with complex texture is difficult to segment as a whole. Due to speckle noise, traditional over-segmentation algorithms may produce mixed superpixels with different labels; these usually lie on the boundary between two areas or contain more noise. In this paper, a new semantic segmentation method for SAR images based on texture complexity analysis and key superpixels is proposed. Texture complexity analysis is performed and, on this basis, mixed superpixels are selected as key superpixels. Specifically, the texture complexity of the input image is calculated by a new method. Then a new superpixel generation method called neighbourhood information simple linear iterative clustering (NISLIC) is used to over-segment the image. For images with high texture complexity, the complex areas are first separated and key superpixels are selected according to certain rules; for images with low texture complexity, key superpixels are extracted directly. Finally, the superpixels are pre-segmented by fuzzy clustering based on the extracted features, and the key superpixels are processed at the pixel level to obtain the final result. The effectiveness of this method has been verified on several kinds of images. Compared with state-of-the-art algorithms, the proposed algorithm more effectively distinguishes different landforms and suppresses the influence of noise, achieving semantic segmentation of SAR images.
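The branching on texture complexity described above can be illustrated with a toy scoring function. Using the mean local standard deviation as the complexity measure, and the particular threshold, are assumptions for illustration; the paper introduces its own calculation:

```python
import numpy as np

def texture_complexity(image, size=3, threshold=10.0):
    """Score image texture complexity as the mean local standard deviation
    over size x size windows; return (score, is_high_complexity)."""
    img = image.astype(np.float64)
    half = size // 2
    p = np.pad(img, half, mode='edge')
    s = np.zeros_like(img)
    s2 = np.zeros_like(img)
    for dy in range(size):
        for dx in range(size):
            win = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            s += win
            s2 += win ** 2
    n = size * size
    local_var = np.clip(s2 / n - (s / n) ** 2, 0.0, None)   # per-pixel variance
    score = float(np.sqrt(local_var).mean())
    return score, score > threshold
```

In the described pipeline the high-complexity branch would first isolate the complex areas before selecting key superpixels, while the low-complexity branch extracts them directly.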


2019, Vol 11 (16), pp. 1854
Author(s): Sahel Mahdavi, Bahram Salehi, Weimin Huang, Meisam Amani, Brian Brisco

Change detection using Remote Sensing (RS) techniques is valuable in numerous applications, including environmental management and hazard monitoring. Synthetic Aperture Radar (SAR) images have proven especially effective in this regard because of their all-weather, day-and-night acquisition capability. In this study, a polarimetric index based on the ratio of span (total power) values was introduced, in which neighbourhood information was taken into account. The relative contribution of the central pixel and its neighbourhood was adjusted using a weight parameter. The proposed index was applied to detect flooded areas in Dongting Lake, Hunan, China, and was then compared with the Wishart Maximum Likelihood Ratio (MLR) test. Results demonstrated that although the proposed index and the Wishart MLR test yielded similar accuracies (accuracies of 94% and 93%, and Kappa coefficients of 0.82 and 0.86, respectively), the inclusion of neighbourhood information in the proposed index not only increased the connectedness and decreased the noise associated with objects in the produced map, but also increased the consistency and confidence of the results.
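The weighting scheme described, blending the central pixel's span ratio with its neighbourhood, can be sketched as follows. The log-ratio form, the 3x3 neighbourhood and the single weight `w` are illustrative assumptions; the paper's exact formulation may differ:

```python
import numpy as np

def span_ratio_index(span_before, span_after, w=0.5):
    """Change index mixing each pixel's span log-ratio with the mean
    log-ratio over its 3x3 neighbourhood, weighted by w."""
    ratio = np.log(span_after / span_before)        # per-pixel change signal
    p = np.pad(ratio, 1, mode='edge')
    neigh = np.zeros_like(ratio)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            neigh += p[dy:dy + ratio.shape[0], dx:dx + ratio.shape[1]]
    neigh /= 9.0                                    # 3x3 neighbourhood mean
    return w * ratio + (1.0 - w) * neigh
```

Lowering `w` gives more influence to the neighbourhood term, which is what smooths isolated speckle responses and improves the connectedness of the detected flooded areas.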

