Research and Development on Information and Communication Technology
Latest Publications


TOTAL DOCUMENTS: 122 (five years: 33)
H-INDEX: 1 (five years: 0)

Published by: MIC Journal of Information and Communications Technology
ISSN: 1859-3534

Author(s):  
Nguyễn Minh Thường

Sphere Decoding (SD) algorithms can achieve quasi-maximum-likelihood (ML) decoding performance over Gaussian multiple-input multiple-output (MIMO) channels with much lower complexity than exhaustive search. The SD algorithm performs a closest-lattice-point search over a limited search space (a hypersphere). In addition, QR decomposition reduces the matrix of the SD linear system to upper-triangular form. The solution is then found by searching an exponentially expanding tree that starts from a single root node and grows by a factor of $M$ (the constellation size) at every level of an $N_T\times N_R$ MIMO system. Fortunately, the SD algorithm shrinks its hypersphere at every level (once the node at that level is determined) and prunes a vast number of candidates, retaining only the valid nodes at the level under consideration. In this work, we propose a statistical approach for estimating the adequate number of valid search nodes at every level of the search tree, aiming to optimize the overall computational workload. We use a massive number of input patterns and extensive simulation to project the number of remaining valid nodes during the search. Simulations were conducted for $4\times 4$ and $8\times 8$ MIMO systems. Our results indicate that, for a given target BER, choosing an appropriate sphere radius is essential, and that the number of necessary calculations increases only at the middle layers and can be quantified generically, regardless of the system characteristics. This finding benefits hardware implementations of SD, where the number of computational units has to be fixed in advance.
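
A minimal sketch of the depth-first, radius-shrinking tree search the abstract describes, assuming a real-valued model $\tilde{y} = Rx + n$ after QR decomposition; the BPSK-like constellation, the fixed initial radius, and all names are illustrative, not the paper's statistical node-counting method.

```python
import numpy as np

def sphere_decode(R, y, constellation, radius):
    """Depth-first sphere decoding for y = R x + n with upper-triangular R.

    Searches the tree from the last symbol (bottom row of R) upward,
    pruning any branch whose partial Euclidean distance exceeds the
    current squared radius, and shrinking the radius whenever a full
    candidate vector is found.
    """
    n = R.shape[0]
    best_x, best_d2 = None, radius ** 2

    def search(level, x_partial, d2):
        nonlocal best_x, best_d2
        for s in constellation:                      # expand by M children
            x_partial[level] = s
            # residual of row `level`, using the already-fixed symbols below
            r = y[level] - R[level, level:] @ x_partial[level:]
            d2_new = d2 + r ** 2
            if d2_new > best_d2:                     # prune: outside sphere
                continue
            if level == 0:                           # full candidate found
                best_x, best_d2 = x_partial.copy(), d2_new  # shrink radius
            else:
                search(level - 1, x_partial, d2_new)

    search(n - 1, np.zeros(n), 0.0)
    return best_x, np.sqrt(best_d2)

# Toy 4x4 example with a BPSK-like constellation {-1, +1}:
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4))
Q, R = np.linalg.qr(H)
x_true = rng.choice([-1.0, 1.0], size=4)
y = Q.T @ (H @ x_true + 0.05 * rng.normal(size=4))  # rotate y by Q^T
x_hat, dist = sphere_decode(R, y, [-1.0, 1.0], radius=3.0)
print(x_hat, x_true)
```

Counting how many calls `search` receives per level under many noise realizations is, in spirit, the kind of per-level node statistic the paper quantifies.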


Author(s):  
Thanh Le ◽  
Hoang Nguyen ◽  
Bac Le

Link prediction in knowledge graphs plays an increasingly essential role in both research and application. By detecting latent connections, we can refine the knowledge in the graph, discover interesting relationships, answer user questions, or make item suggestions. In this paper, we survey the methods that currently achieve good results in link prediction. Specifically, we survey both static and temporal graphs. First, we divide the algorithms into groups based on how they represent entities and relations. We then describe each original idea and analyze the key improvements. Within each group, we compare and investigate the pros and cons of each method as well as its applications. On this basis, we draw the correlation between the two graph types in link prediction. Finally, from this overview of the link prediction problem, we propose some directions for improving the models in future studies.
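
Since the survey's groupings rest on how entities and relations are represented, here is a minimal sketch of one well-known representative, the TransE scoring function; the two-dimensional embeddings and the triples are invented purely for illustration.

```python
import numpy as np

# Toy entity/relation embeddings. In TransE, a true triple (h, r, t)
# should satisfy h + r ~= t, so a lower score means a more plausible link.
emb_entity = {"Hanoi": np.array([0.9, 0.1]),
              "Vietnam": np.array([1.0, 1.0]),
              "Tokyo": np.array([0.2, 0.8])}
emb_relation = {"capital_of": np.array([0.1, 0.9])}

def transe_score(h, r, t):
    """L2 distance ||h + r - t||; smaller = more likely a true link."""
    return np.linalg.norm(emb_entity[h] + emb_relation[r] - emb_entity[t])

print(transe_score("Hanoi", "capital_of", "Vietnam"))  # ~0.0: plausible
print(transe_score("Tokyo", "capital_of", "Vietnam"))  # ~0.99: implausible
```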


Author(s):  
Thanh Hiên Nguyễn ◽  
Thi Tran ◽  
Hieu Duong ◽  
Hoai Tran

Runoff prediction has recently become an essential task in assessing the impact of climate change on people's livelihoods and production. However, runoff time series always exhibit nonlinear and non-stationary features, which makes them very difficult to predict accurately. Machine learning has recently proven to be a powerful tool in helping society adapt to a changing climate, and its subfield, deep learning, has shown its power in approximating nonlinear functions. In this study, we propose a method based on deep belief networks (DBN) for runoff prediction. To evaluate the proposed method, we collected runoff datasets from the Srepok and Dak Nong rivers, located in the mountainous regions of the Central Highlands of Vietnam, for the periods 2001-2007 at the Dak Nong hydrology station and 1990-2011 at the Buon Don hydrology station, respectively. Experimental results show that DBN outperforms LSTM, BiLSTM, a multi-layer perceptron (MLP) trained by particle swarm optimization (PSO), and an MLP trained by stochastic gradient descent (SGD) with gradients computed by the backpropagation (BP) procedure. The results also confirm that DBN is suitable for the task of runoff prediction.
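
A minimal sketch of the DBN idea using scikit-learn's BernoulliRBM: two stacked RBMs are pretrained greedily on sliding windows of the series and a linear head is fit on top. The synthetic series, window length, and layer sizes are invented, and unlike a full DBN there is no joint supervised fine-tuning here.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LinearRegression

def make_windows(series, lag):
    """Turn a runoff series into (X, y) pairs: `lag` past values -> next value."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

# Synthetic stand-in for a daily runoff record (the real data in the paper
# comes from the Dak Nong and Buon Don hydrology stations).
rng = np.random.default_rng(1)
t = np.arange(3000)
runoff = 50 + 30 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 3, t.size)

X, y = make_windows(runoff, lag=14)
split = int(0.8 * len(X))

# Two stacked RBMs = greedy layer-wise DBN pretraining; the linear head is
# then fit on the learned features (no joint fine-tuning in this sketch).
dbn = Pipeline([
    ("scale", MinMaxScaler()),                 # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, random_state=0)),
    ("head", LinearRegression()),
])
dbn.fit(X[:split], y[:split])
print("test RMSE:", np.sqrt(np.mean((dbn.predict(X[split:]) - y[split:]) ** 2)))
```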


Author(s):  
Thanh Vi Nguyen ◽  
Thế Cường Nguyễn

In binary classification problems, the two classes of data tend to differ from each other, and the problem becomes more complicated when the number of data points in the clusters of each class also differs. Traditional algorithms such as the Support Vector Machine (SVM), Twin Support Vector Machine (TSVM), or Least Squares Twin Support Vector Machine (LSTSVM) cannot sufficiently exploit information about the number of data points in each cluster of the data, which may affect the accuracy of classification. In this paper, we propose a new Improved Least Squares Support Vector Machine (called ILS-SVM) for binary classification problems with a class-vs-clusters strategy. Experimental results show that the ILS-SVM training time is faster than that of TSVM, and the ILS-SVM accuracy is better than that of LSTSVM and TSVM in most cases.
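
The paper's ILS-SVM formulation is not reproduced here; as background, this is a minimal sketch of the standard least-squares SVM such variants build on, where training reduces to solving one linear system. Data and hyperparameters are illustrative.

```python
import numpy as np

def lssvm_fit(X, y, gamma=1.0, sigma=1.0):
    """Train a standard least-squares SVM (the baseline the variants improve on).

    LS-SVM replaces the SVM's inequality constraints with equalities, so
    training reduces to one linear system:
        [ 0    y^T              ] [b]   [0]
        [ y    Omega + I/gamma  ] [a] = [1]
    where Omega_ij = y_i y_j K(x_i, x_j) with an RBF kernel K.
    """
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    Omega = K * np.outer(y, y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]            # bias b, dual coefficients alpha

def lssvm_predict(X_train, y, alpha, b, X_new, sigma=1.0):
    sq = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    return np.sign(K @ (alpha * y) + b)

# Toy data in the class-vs-clusters spirit: class +1 is one cluster,
# class -1 is made of two clusters of different sizes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 0.3, (30, 2)),
               rng.normal([3, 3], 0.3, (15, 2)),
               rng.normal([3, 0], 0.3, (15, 2))])
y = np.array([1.0] * 30 + [-1.0] * 30)
b, alpha = lssvm_fit(X, y)
print((lssvm_predict(X, y, alpha, b, X) == y).mean())  # training accuracy
```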


Author(s):  
Thanh Huan Phan ◽  
Hoài Bắc Lê

In 1993, Agrawal et al. proposed the first algorithm for mining traditional frequent itemsets on a binary transactional database with unweighted items; this algorithm is essential in finding hidden relationships among the items in your data. By 1998, with the development of various types of transactional databases, researchers had proposed frequent itemset mining algorithms for transactional databases with weighted items (where the importance/meaning/value of the items differs), which provide more pieces of knowledge than traditional frequent itemset mining. In this article, the authors present a survey of frequent itemset mining algorithms on transactional databases with weighted items over the past twenty years. This research helps researchers choose the right technical solution when scaling up to big data mining. Finally, the authors give their recommendations and directions for their future research.
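
To make the weighted setting concrete, here is a tiny sketch of one common weighted-support measure (mean item weight times relative frequency); the surveyed algorithms differ in the exact definition, and the database and weights below are invented.

```python
from itertools import combinations

# Toy transactional database with item weights.
transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"milk", "butter"}, {"bread", "milk", "butter"}]
weights = {"bread": 0.4, "milk": 0.9, "butter": 0.7}

def weighted_support(itemset):
    """Mean item weight of the itemset times its relative frequency."""
    freq = sum(itemset <= t for t in transactions) / len(transactions)
    w = sum(weights[i] for i in itemset) / len(itemset)
    return w * freq

items = sorted(weights)
min_wsup = 0.3
for size in (1, 2):
    for combo in combinations(items, size):
        s = frozenset(combo)
        ws = weighted_support(s)
        if ws >= min_wsup:
            print(set(s), round(ws, 3))
```

Note that, unlike plain support, a measure like this is generally not anti-monotone, which is precisely why many of the surveyed algorithms introduce weight-based upper bounds in order to prune safely.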


Author(s):  
Minh Thanh Tạ

This paper proposes a new watermarking method for digital images that combines DWT-QIM-based embedding with a visual secret sharing (VSS) method. First, the watermark image is separated into $n$ shares using the $k$-out-of-$n$ method, called $(k,n)$ visual secret sharing. One of the shares is embedded into the original image for copyright protection. The other $(n-1)$ shares are registered with the Vietnam Copyright Department. When a dispute arises, the verifier can extract the watermark information from the watermarked image and then decode it with $(k-1)$ shares chosen from the $(n-1)$ registered shares to recover the copyright information. Our experimental results show that the proposed method works efficiently on digital images.
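
A minimal sketch of the DWT-QIM embedding step, assuming PyWavelets is available; the Haar wavelet, the quantization step, the choice of the approximation band, and the random bitstream standing in for a VSS share are all illustrative (the VSS share construction itself is not shown).

```python
import numpy as np
import pywt  # assumed available: pip install PyWavelets

DELTA = 12.0  # quantization step (illustrative)

def qim_embed(coeffs, bits):
    """Embed one bit per coefficient using two quantizers offset by DELTA/2."""
    out = coeffs.copy().ravel()
    for i, b in enumerate(bits):
        off = b * DELTA / 2
        out[i] = np.round((out[i] - off) / DELTA) * DELTA + off
    return out.reshape(coeffs.shape)

def qim_extract(coeffs, n_bits):
    """Pick the quantizer (bit) whose lattice point is nearest each coefficient."""
    flat = coeffs.ravel()
    bits = []
    for i in range(n_bits):
        d0 = abs(flat[i] - np.round(flat[i] / DELTA) * DELTA)
        d1 = abs(flat[i] - (np.round((flat[i] - DELTA / 2) / DELTA) * DELTA
                            + DELTA / 2))
        bits.append(0 if d0 <= d1 else 1)
    return bits

rng = np.random.default_rng(3)
image = rng.uniform(0, 255, (64, 64))
share_bits = list(rng.integers(0, 2, 100))       # the VSS share, as a bitstream

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")      # one-level DWT
cA_marked = qim_embed(cA, share_bits)
watermarked = pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")

cA2, _ = pywt.dwt2(watermarked, "haar")
print(qim_extract(cA2, 100) == share_bits)       # True: bits recovered
```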


Author(s):  
Hai-Hong Phan

Detecting and identifying table structure is an important issue in document digitization. Although great strides have been made with current deep learning techniques, table structure identification remains a difficult problem, especially when digitizing documents in practice. This paper proposes a solution for digitizing tabular documents based on the Cascade R-CNN HRNet network to detect and classify tables, integrated with image processing algorithms to improve table data recognition. The proposed algorithm proved effective on real data, namely hydrometeorological station record books containing tables with both simple and complex structures, achieving over 98% accuracy.
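
For orientation only, a minimal inference sketch with MMDetection (2.x API), which ships Cascade R-CNN + HRNet configurations; the config/checkpoint paths, the input image name, and the score threshold are assumptions, not the paper's pipeline.

```python
# Requires mmdetection 2.x; the paths below are illustrative placeholders.
from mmdet.apis import init_detector, inference_detector

config = "configs/hrnet/cascade_rcnn_hrnetv2p_w32_20e_coco.py"   # assumed path
checkpoint = "checkpoints/cascade_rcnn_hrnetv2p_w32_20e.pth"     # assumed path

model = init_detector(config, checkpoint, device="cuda:0")
result = inference_detector(model, "station_record_page.jpg")    # your scan

# Keep confident table detections only; crops would then go to the
# image-processing stage (line detection, cell segmentation, OCR).
for class_id, dets in enumerate(result):     # one (N, 5) array per class
    for x1, y1, x2, y2, score in dets:
        if score >= 0.5:                     # threshold is a guess
            print(class_id, (x1, y1, x2, y2), score)
```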


Author(s):  
Long Nguyen ◽  
Dinh Nguyen Duc ◽  
Hoai Nguyen Xuan

In the real world, multi-objective problems (MOPs) are relatively common in optimization in areas such as design, planning, and decision support. In practice, problems involve two or more objectives, and there is a class of problems, called expensive problems, that have complex mathematical models and large computational costs. They cannot be solved by ordinary techniques and are usually tackled with techniques such as simulation, decomposition, or problem transformation. In particular, using a surrogate model based on Kriging or neural network techniques in combination with an evolutionary algorithm is a subtle choice that has produced many positive results and is being studied and applied in practice. However, using a surrogate model with Kriging or neural networks, combined with selection strategies and sampling, can reduce the robustness of the algorithms during the search. This paper analyzes the issues affecting the robustness of multi-objective evolutionary algorithms (MOEAs) that use surrogate models and suggests the use of a guidance technique to increase the robustness of the algorithm. Through analysis and experiment, the results are competitive and effective in improving the quality of MOEAs that use a surrogate model to solve expensive problems.
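
A minimal sketch of the generic surrogate pre-screening loop such MOEAs rely on, using scikit-learn's Gaussian process (Kriging) regressor; the toy objectives, mutation operator, and crude scalarized ranking are invented, and the paper's guidance technique is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objectives(x):
    """Stand-in for an expensive bi-objective evaluation (illustrative)."""
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

rng = np.random.default_rng(4)
dim, n_init = 5, 20
X = rng.uniform(0, 1, (n_init, dim))                 # evaluated designs
Y = np.array([expensive_objectives(x) for x in X])   # true (costly) values

for gen in range(10):
    # One Kriging surrogate per objective, fit on everything evaluated so far.
    gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
           .fit(X, Y[:, m]) for m in range(Y.shape[1])]

    # Generate many cheap candidates (here: Gaussian mutation of parents),
    # rank them on the surrogate, and truly evaluate only the best few.
    parents = X[rng.integers(0, len(X), 50)]
    cand = np.clip(parents + rng.normal(0, 0.1, parents.shape), 0, 1)
    pred = np.column_stack([gp.predict(cand) for gp in gps])
    promising = cand[np.argsort(pred.sum(axis=1))[:3]]   # crude scalarization

    X = np.vstack([X, promising])
    Y = np.vstack([Y, [expensive_objectives(x) for x in promising]])

print("expensive evaluations used:", len(X))
```

A poorly fit surrogate can steer this loop toward the wrong region, which is the robustness issue the paper's guidance technique targets.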


Author(s):  
Lê Văn Hùng

3D hand pose estimation from egocentric vision is an important topic in the construction of assistance systems and the modeling of robot hands in robotics. In this paper, we propose a complete method for estimating the 3D hand pose from complex scene data obtained from an egocentric sensor, in which we propose a simple yet highly efficient pre-processing step for hand segmentation. In the estimation process, we fine-tune Hand PointNet (HPN), V2V-PoseNet (V2V), and Point-to-Point Regression PointNet (PtoP) to estimate the 3D hand pose from the data collected from the egocentric sensor, such as the CVAR and FPHA (First-Person Hand Action) datasets. HPN, V2V, and PtoP are deep networks/Convolutional Neural Networks (CNNs) that estimate the 3D hand pose from point cloud data of the hand. We evaluate the estimation results with and without the pre-processing step to assess the effectiveness of the proposed method. The results show that the 3D distance error is many times larger than on unoccluded hand datasets (hand data obtained from surveillance cameras, viewed from the top, front, and sides) such as the MSRA, NYU, and ICVL datasets. The results are quantified, analyzed, and shown on the point cloud data of the CVAR dataset and projected onto the color images of the FPHA dataset.
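
A minimal sketch of an egocentric hand-segmentation pre-processing step; this is not the paper's exact method, only the common nearest-surface heuristic, and the depth band, camera intrinsics, and fake frame are invented.

```python
import numpy as np

def segment_hand(depth, fx, fy, cx, cy, near=0.15, far=0.60):
    """Simple egocentric hand segmentation + back-projection sketch.

    Assumption: in egocentric views the hand is the closest surface to the
    sensor, so keep pixels in a near depth band (in meters) and back-project
    them to a 3D point cloud for the pose networks (HPN, V2V, PtoP).
    """
    v, u = np.nonzero((depth > near) & (depth < far))   # in-band pixels
    z = depth[v, u]
    x = (u - cx) * z / fx                               # pinhole back-projection
    y = (v - cy) * z / fy
    cloud = np.column_stack([x, y, z])
    # Center the cloud, as point-cloud pose networks usually expect.
    return cloud - cloud.mean(axis=0)

# Fake depth frame: background at 1.2 m, a "hand" blob at ~0.3 m.
depth = np.full((240, 320), 1.2)
depth[100:140, 150:200] = 0.3
cloud = segment_hand(depth, fx=475.0, fy=475.0, cx=160.0, cy=120.0)
print(cloud.shape)   # (2000, 3): one 3D point per in-band pixel
```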


Author(s):  
Linh Manh Pham

Many domains of human life are increasingly impacted by applications of the Internet of Things (IoT). Embedded devices produce masses of data day after day, requiring a strong network infrastructure. The inclusion of messaging protocols like MQTT is important to ensure as few errors as possible when sending millions of IoT messages. This protocol is a key component of the IoT universe thanks to its lightweight design and low power consumption. Distributed MQTT systems are typically needed in real application environments because centralized MQTT approaches cannot accommodate massive volumes of data. Although decentralized MQTT systems are scalable, they are not suited to variability in traffic workload: IoT service providers may incur extra expense because computing resources are overestimated. This points to the need for a new approach that adapts to workload fluctuation. This article provides such an elasticity approach by proposing a modular MQTT framework. To guarantee the elasticity of the MQTT server cluster while keeping the IoT implementation intact, the framework uses off-the-shelf components. The elasticity feature of our framework is verified by various experiments.
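
To make the elasticity idea concrete, here is a toy provisioning rule, not the paper's algorithm: size the MQTT broker cluster from the observed message rate plus headroom; the capacity figure, bounds, and trace are invented, and in practice the chosen replica count would be applied through an orchestrator such as Kubernetes.

```python
import math

def target_brokers(msg_rate, per_broker_capacity=50_000,
                   min_n=2, max_n=16, headroom=0.2):
    """Toy elasticity rule: provision enough MQTT broker replicas for the
    observed message rate plus headroom, within fixed cluster bounds."""
    need = math.ceil(msg_rate * (1 + headroom) / per_broker_capacity)
    return max(min_n, min(max_n, need))

# A traffic trace fluctuating over a day (messages/second, illustrative):
for rate in [10_000, 80_000, 250_000, 600_000, 120_000]:
    print(f"{rate:>7} msg/s -> {target_brokers(rate)} brokers")
```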

