Greedy Network Growth Model of Social Network Service

Author(s):
Shohei Usui
Fujio Toriumi
Masato Matsuo
Takatsugu Hirayama
...

As new network communication tools are developed, social network services (SNSs) such as Facebook and Twitter have become a social phenomenon with global impact on society. Many researchers are therefore studying the structure of relationship networks among users. We propose a greedy network growth model that appropriately adds nodes and links while automatically reproducing a target network, and that handles a wide range of networks with high expressive ability. Experimental results show that the model accurately reproduced 92.4% of 189 target networks from real services; it also reproduced 30 networks built by existing network models. The proposed model thus matches the expressiveness of many existing network models.
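As a rough illustration of greedy growth (not the authors' actual model, which reproduces richer statistics of a target network), the sketch below grows a graph one node at a time, greedily adding links whenever the running average degree lags a target value:

```python
import random

def greedy_grow(target_avg_degree, n_nodes, seed=0):
    # Toy greedy growth: each new node links to shuffled existing nodes
    # for as long as the running average degree is below the target.
    rng = random.Random(seed)
    adj = {0: set()}
    for new in range(1, n_nodes):
        adj[new] = set()
        candidates = list(range(new))
        rng.shuffle(candidates)
        for peer in candidates:
            avg = sum(len(v) for v in adj.values()) / len(adj)
            if avg >= target_avg_degree:
                break
            adj[new].add(peer)
            adj[peer].add(new)
        if not adj[new]:  # always add at least one link to stay connected
            peer = rng.randrange(new)
            adj[new].add(peer)
            adj[peer].add(new)
    return adj

net = greedy_grow(target_avg_degree=4.0, n_nodes=100)
avg_degree = sum(len(v) for v in net.values()) / len(net)
```

The greedy step keeps the average degree hovering near the target throughout growth; the paper's model applies the same "add only what moves you toward the target" idea to a much richer set of network statistics.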

Author(s):  
Akiyo Nadamoto
Eiji Aramaki
Takeshi Abekawa
Yohei Murakami

Internet-based social network services (SNSs) have grown increasingly popular and are producing a great amount of content. Multiple users freely post their comments in SNS threads, and extracting the gist of these comments can be difficult due to their complicated dialog. In this paper, the authors propose a system that extracts the gist of an SNS thread by comparing the thread with Wikipedia. The granularity of information in an SNS thread differs from that in Wikipedia articles, which implies that the information in a thread may be related to multiple Wikipedia articles. The authors extract target articles from Wikipedia based on its link graph. When an SNS thread is compared with Wikipedia, the focus is on the table of contents (TOC) of the relevant Wikipedia articles. The system uses a proposed coverage degree to compare the comments in a thread with the information in the TOC; the Wikipedia paragraph with the highest coverage degree is taken as the gist of the thread.
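A minimal sketch of the idea, assuming (for illustration only) that the coverage degree is the fraction of a TOC section's terms that occur in the thread's comments; the section names and term lists below are hypothetical, not taken from the paper:

```python
def coverage_degree(toc_section_terms, thread_comments):
    # fraction of the section's terms that appear in the thread's comments
    text = " ".join(c.lower() for c in thread_comments)
    terms = [t.lower() for t in toc_section_terms]
    if not terms:
        return 0.0
    return sum(1 for t in terms if t in text) / len(terms)

# hypothetical TOC sections of a Wikipedia article, and an SNS thread
sections = {
    "History": ["founded", "origin", "launch"],
    "Features": ["timeline", "hashtag", "retweet"],
}
comments = ["I love the new timeline layout",
            "hashtags trend fast",
            "retweet this!"]
gist_section = max(sections, key=lambda s: coverage_degree(sections[s], comments))
```

Here the "Features" section covers all of the thread's vocabulary, so it would be selected as the gist; the paper's actual measure works over TOC paragraphs rather than flat term lists.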


2020
Vol 12 (18)
pp. 2985
Author(s):
Yeneng Lin
Dongyun Xu
Nan Wang
Zhou Shi
Qiuxiao Chen

Automatic road extraction from very-high-resolution remote sensing images has become a popular topic in a wide range of fields. Convolutional neural networks are often used for this purpose. However, many network models do not achieve satisfactory extraction results because of the elongated nature and varying sizes of roads in images. To improve the accuracy of road extraction, this paper proposes a deep learning model based on the structure of Deeplab v3. It incorporates a squeeze-and-excitation (SE) module to weight different feature channels and performs multi-scale upsampling to preserve and fuse shallow and deep information. To address the imbalance of road samples in images, different loss functions and backbone modules are tested during training. Compared with cross-entropy loss, Dice loss improves the model's performance during both training and prediction. The SE module is superior to ResNeXt and ResNet in improving the integrity of the extracted roads. Experimental results on the Massachusetts Roads Dataset show that the proposed model (Nested SE-Deeplab) improves the F1-score by 2.4% and Intersection over Union by 2.0% compared with FC-DenseNet. The proposed model also achieves better segmentation accuracy in road extraction than other mainstream deep learning models, including Deeplab v3, SegNet, and UNet.
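Why Dice loss helps with imbalanced road pixels can be seen in a few lines. This is the standard soft Dice formulation, not code from the paper; the toy arrays stand in for flattened prediction and mask tensors:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # soft Dice loss: 1 - 2|P ∩ T| / (|P| + |T|); because it is a ratio of
    # overlap to total mass, it is less sensitive to the road/background
    # class imbalance than pixelwise cross entropy
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

pred = np.array([0.9, 0.8, 0.1, 0.2])    # predicted road probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])  # ground-truth road mask
loss = dice_loss(pred, target)
```

A perfect prediction drives the loss to zero regardless of how few road pixels the image contains, which is exactly the property that matters for thin, elongated roads.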


2014
Vol 513-517
pp. 2211-2214
Author(s):
Wei Ren
Yu Hui Qiu

This paper studies the network model of SLN by applying social network methodology to a widely accepted, real-life user-interaction network scenario. The data and experiments are based on the Sina Weibo micro-blogging service. Results show that the statistical properties of SLN closely parallel those of social networks. Contrary to common understanding, nodes with too much semantics (especially within one category) are less likely to receive links from newly added nodes.


Complexity
2021
Vol 2021
pp. 1-16
Author(s):
Bendegúz Dezső Bak
Tamás Kalmár-Nagy

Cluster growth models are utilized for a wide range of scientific and engineering applications, including modeling epidemics and the dynamics of liquid propagation in porous media. Invasion percolation is a stochastic branching process in which the sites of a network become occupied, leading to the formation of clusters (groups of interconnected, occupied sites). The occupation of sites is governed by their resistance distribution; the invasion annexes the sites with the least resistance. An iterative cluster growth model is considered for computing the expected size and perimeter of the growing cluster. A necessary ingredient of the model is a description of the mean perimeter as a function of the cluster size. We propose such a relationship for the site square lattice. The proposed model exhibits (by design) the expected phase transition of percolation models, i.e., it diverges at the percolation threshold p_c. We describe an application to the porosimetry percolation model. The calculations of the cluster growth model compare well with simulation results.
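A minimal sketch of invasion percolation on a site square lattice (illustrative only; the paper's contribution is the analytic size-perimeter relationship, not this simulation): starting from the center site, the invasion repeatedly occupies the frontier site with the least random resistance, and the perimeter is the set of unoccupied neighbors of the cluster:

```python
import heapq
import random

def invasion_percolation(n, steps, seed=0):
    # occupy `steps` sites on an n x n lattice, always annexing the
    # frontier site with the least (uniform random) resistance
    rng = random.Random(seed)
    resistance = {}

    def r(site):
        if site not in resistance:
            resistance[site] = rng.random()
        return resistance[site]

    start = (n // 2, n // 2)
    cluster = {start}
    frontier = []  # min-heap of (resistance, site)

    def push_neighbors(x, y):
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in cluster:
                heapq.heappush(frontier, (r((nx, ny)), (nx, ny)))

    push_neighbors(*start)
    while frontier and len(cluster) < steps:
        _, site = heapq.heappop(frontier)
        if site in cluster:
            continue
        cluster.add(site)
        push_neighbors(*site)
    return cluster

cluster = invasion_percolation(n=51, steps=200)
# the perimeter: unoccupied lattice neighbors of the cluster
perimeter = {(x + dx, y + dy) for (x, y) in cluster
             for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= x + dx < 51 and 0 <= y + dy < 51} - cluster
```

Running this for increasing `steps` and recording `len(perimeter)` against `len(cluster)` produces exactly the size-perimeter data that the paper's mean-perimeter relationship models.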


Algorithms
2019
Vol 12 (11)
pp. 234
Author(s):
Anam Luqman
Muhammad Akram
Florentin Smarandache

A complex neutrosophic set is a useful model for handling indeterminate situations with a periodic nature. It is characterized by truth, indeterminacy, and falsity degrees, each a combination of a real-valued amplitude term and a complex-valued phase term. Hypergraphs are objects that enable us to dig out invisible connections between the underlying structures of complex systems, such as those leading to sustainable development. In this paper, we apply the fruitful concept of complex neutrosophic sets to the theory of hypergraphs. We define complex neutrosophic hypergraphs and discuss certain of their properties, including lower truncation, upper truncation, and transition levels. Furthermore, we define T-related complex neutrosophic hypergraphs and properties of minimal transversals of complex neutrosophic hypergraphs. Finally, we model certain social networks with intersecting communities through the score functions and choice values of complex neutrosophic hypergraphs, and give a brief comparison of our proposed model with other existing models.
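As a rough illustration of the underlying set model (not the paper's hypergraph constructions), each membership degree of a complex neutrosophic element is a complex value amplitude·e^{i·phase}. The score function below is a hypothetical example, since score functions vary across the literature:

```python
import cmath

def cns_element(t_amp, t_phase, i_amp, i_phase, f_amp, f_phase):
    # each degree is amplitude * e^{i*phase}, with amplitude in [0, 1];
    # the phase carries the periodic part of the information
    assert all(0.0 <= a <= 1.0 for a in (t_amp, i_amp, f_amp))
    return tuple(a * cmath.exp(1j * p)
                 for a, p in ((t_amp, t_phase), (i_amp, i_phase), (f_amp, f_phase)))

def score(element):
    # hypothetical score: truth amplitude minus falsity amplitude
    t, _, f = element
    return abs(t) - abs(f)

x = cns_element(0.8, 0.5, 0.3, 0.1, 0.2, 0.9)
```

Ranking community members by such a score (and by choice values) is the mechanism the paper uses to model social networks with intersecting communities.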


Symmetry
2021
Vol 13 (1)
pp. 84
Author(s):
Minkyung Kwak
Youngho Cho

In botnets, a bot master regularly sends command and control (C&C) messages to bots for various purposes, such as issuing commands and collecting critical data. Although such C&C messages can be encrypted by cryptographic methods to hide their content, existing botnet detection mechanisms can still detect the existence of a botnet by capturing suspicious network traffic between the bot master (or the C&C server) and numerous bots. Recently, steganography-based botnets (stego-botnets) have emerged to make C&C communication traffic look normal to botnet detection systems. In stego-botnets, every C&C message is embedded in a multimedia file, such as an image, using steganography techniques and shared on social network service (SNS) websites (such as Facebook) or online messengers (such as WeChat or KakaoTalk). Consequently, traditional botnet detection systems without steganography detection methods cannot detect them. Meanwhile, according to our survey, existing studies on steganography botnets are limited to image steganography techniques, although video steganography has some obvious advantages over image steganography. Motivated by this, in this paper we study a video steganography-based botnet on SNS platforms. We first propose a video steganography botnet model based on SNS messengers. In addition, we design a new payload-approach-based video steganography method (DECM: Divide-Embed-Component Method) that can embed much more secret data than existing tools, using the two open tools VirtualDub and Stegano. We show that our proposed model can be implemented on the Telegram SNS messenger, and we conduct extensive experiments comparing our model with DECM against an existing image steganography-based botnet in terms of C&C communication efficiency and undetectability.
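The principle behind hiding C&C payloads in media can be illustrated with least-significant-bit embedding over raw cover bytes. This toy sketch is not DECM itself (which divides the payload across video frame components), and the byte values are stand-ins for real pixel data:

```python
def embed_bits(pixels, message):
    # hide each bit of the message in the least significant bit
    # of one pixel byte, so the cover changes imperceptibly
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for payload")
    stego = list(pixels)
    for k, bit in enumerate(bits):
        stego[k] = (stego[k] & ~1) | bit
    return stego

def extract_bits(pixels, n_bytes):
    # recover n_bytes by reading the LSBs back in the same order
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j] & 1) << j
        out.append(byte)
    return bytes(out)

cover = list(range(64, 200))  # stand-in for raw pixel bytes of a frame
stego = embed_bits(cover, b"attack at dawn")
recovered = extract_bits(stego, 14)
```

Because each cover byte changes by at most one, the stego file is statistically close to the original, which is why detection systems without steganalysis miss such C&C channels; a video simply offers many more cover bytes per file than an image.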


Author(s):  
Muhammad Hassan
Yan Wang
Wei Pang
Di Wang
Daixi Li
...  

Shoeprints contain valuable information for tracing evidence in forensic scenes, and they need to be rendered as clean, sharp, high-fidelity images. Most acquired shoeprints are of low quality and/or distorted, so high-fidelity shoeprint generation is of great significance in forensic science. A wide range of deep learning models has been suggested for super-resolution, either generalized or application specific. Considering the crucial challenges in shoeprint-based processing and the lack of specific algorithms, we propose a deep-learning-based GUV-Net model for high-fidelity shoeprint generation. GUV-Net learns features in the manner of VAE, U-Net, and GAN network models, with special treatment of absent ground-truth shoeprints. GUV-Net encodes efficient probabilistic distributions in the latent space and decodes variants of samples together with the key features passed along. GUV-Net forwards the learned samples to a refinement unit that proceeds to the generation of the high-fidelity output. The refinement unit receives low-level features from the decoding module at distinct levels. Furthermore, the refinement process is made more efficient by inverse encoding in a high-dimensional space through a parallel inverse-encoding network. Objective functions at different levels enable the model to optimize its parameters efficiently, mapping a low-quality image to a high-fidelity one while maintaining the salient features that matter to forensics. Finally, the performance of the proposed model is evaluated against state-of-the-art super-resolution network models.
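GUV-Net's VAE-like latent sampling follows the standard reparameterization trick, sketched below with stand-in encoder outputs (`mu` and `log_var` here are placeholders, not values produced by GUV-Net):

```python
import numpy as np

def sample_latent(mu, log_var, rng):
    # reparameterization: z = mu + sigma * eps with eps ~ N(0, I),
    # so different eps draws decode to variants of the same input
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros(8)             # placeholder encoder mean
log_var = np.full(8, -2.0)   # placeholder encoder log-variance
z1 = sample_latent(mu, log_var, rng)
z2 = sample_latent(mu, log_var, rng)
```

Sampling the latent this way keeps the encoder's distribution differentiable while still yielding distinct decoded variants, which is what lets a decoder produce multiple plausible reconstructions of one degraded shoeprint.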

