International Journal of Computers Communications & Control
Latest Publications


TOTAL DOCUMENTS: 999 (FIVE YEARS: 178)

H-INDEX: 25 (FIVE YEARS: 7)

Published by Agora University of Oradea

ISSN: 1841-9844, 1841-9836

Author(s): Ahmad Hakimi Bin Ahmad Sa'ahiry, Abdul Halim Ismail, Latifah Munirah Kamaruddin, Mohd Sani Mohamad Hashim, Muhamad Safwan Muhamad Azmi, et al.

Indoor positioning systems have become an essential substitute for the Global Positioning System (GPS). GPS, which relies on Global Navigation Satellite Systems (GNSS), cannot provide accurate positioning indoors because of multipath effects and shadow fading. The fingerprinting method with Wi-Fi technology is a promising way to solve this issue. However, the fingerprinting method has several problems: collected fingerprinting databases vary in sample size, and previous research does not indicate any standard for the sample size to be used. In this paper, the effect of sample size in Wi-Fi fingerprinting databases is discussed in depth, and a statistical analysis of different sample sizes is presented. Furthermore, two methods, K-Nearest Neighbors (KNN) and a Deep Neural Network (DNN), are used to examine the effect of sample size in terms of accuracy and distance error. The discussion in this paper contributes to better sample-size selection depending on the method chosen by the user. The results show that sample size is an important metric in developing an indoor positioning system, as it affects the result of the location estimation.
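
As a rough illustration of the KNN side of such a pipeline, the sketch below builds a synthetic fingerprint database from a simple log-distance path-loss model (an assumption for illustration, not the paper's data) and estimates positions by averaging the coordinates of the k nearest fingerprints; varying `n_neighbors` and the number of reference samples reproduces the kind of sample-size experiment described above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical setup: 4 access points at the corners of a 20 m x 20 m floor;
# RSSI follows a log-distance path-loss model with shadow-fading noise.
rng = np.random.default_rng(0)
aps = np.array([[0, 0], [20, 0], [0, 20], [20, 20]], dtype=float)
positions = rng.uniform(0, 20, size=(200, 2))            # reference points (m)
dist = np.linalg.norm(positions[:, None, :] - aps[None, :, :], axis=2)
rssi = -40 - 20 * np.log10(dist + 0.1) + rng.normal(0, 2, dist.shape)

# KNN position estimate: average the coordinates of the k nearest fingerprints.
knn = KNeighborsRegressor(n_neighbors=3).fit(rssi, positions)
queries = rssi[:5] + rng.normal(0, 2, (5, 4))            # noisy online readings
estimates = knn.predict(queries)
errors = np.linalg.norm(estimates - positions[:5], axis=1)
print("mean distance error (m):", errors.mean())
```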


Author(s): Ping Zhang, Jia-Yao Yang, Hao Zhu, Yue-Jie Hou, Yi Liu, et al.

In the era of artificial intelligence, machine learning methods are used successfully in various fields. Machine learning has attracted extensive attention from investors in the financial market, especially for stock price prediction. One argument against machine learning methods for stock price prediction, however, is that they are black-box models which are difficult to interpret. In this paper, we focus on predicting future stock prices from historical prices with machine learning and deep learning methods: support vector machine (SVM), random forest (RF), Bayesian classifier (BC), decision tree (DT), multilayer perceptron (MLP), convolutional neural network (CNN), bi-directional long short-term memory (BiLSTM), the embedded CNN, and the embedded BiLSTM. Firstly, we manually design several financial time series in which the future price correlates with the historical prices in pre-designed modes, namely the curve-shape-feature (CSF) and non-curve-shape-feature (NCSF) modes. In the CSF mode, future prices can be extracted from the curve shapes of the historical prices; in the NCSF mode, they cannot. Secondly, we apply the various algorithms to these pre-designed and to real financial time series. We find that existing machine learning and deep learning algorithms fail at stock price prediction because, in real financial time series, little information about future prices is carried in the CSF mode, and perhaps more is carried in the NCSF mode. Machine learning and deep learning algorithms are good at handling CSF structure in historical data, which is why they succeed in image recognition and natural language processing, but they are ill-suited to stock price prediction on account of the NCSF structure. Accurate stock price prediction is therefore the key to successful investment, and new machine learning algorithms that can handle NCSF series are needed.
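
To make the CSF/NCSF distinction concrete, the following toy sketch (entirely synthetic; the series generator and window size are assumptions, not the authors' construction) builds one series whose next move is a function of the recent curve shape and one whose moves are independent noise, then compares a support vector classifier's directional accuracy on the two; the classifier should beat chance only on the CSF-style series.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_series(n, csf=True):
    """Toy price series: in CSF mode the next return is a function of the
    shape of the last five returns; in NCSF mode returns are independent."""
    r = rng.normal(0, 1, n)
    if csf:
        for t in range(5, n):
            r[t] = np.sign(r[t - 5:t].mean()) * abs(r[t])
    return np.cumsum(r)

def windows(prices, w=5):
    X = np.array([prices[t - w:t] - prices[t - w] for t in range(w, len(prices) - 1)])
    y = (prices[w + 1:] > prices[w:-1]).astype(int)     # next step up or down?
    return X, y

for name, csf in [("CSF", True), ("NCSF", False)]:
    X, y = windows(make_series(3000, csf))
    Xtr, Xte, ytr, yte = train_test_split(X, y, shuffle=False)
    print(name, "directional accuracy:", round(SVC().fit(Xtr, ytr).score(Xte, yte), 2))
```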


Author(s): Quan Xiao, Xia Li

Learners’ satisfaction plays a critical role in the success of online learning platforms. Many factors that affect online learning satisfaction have been addressed by previous studies; however, the mechanisms by which these factors are associated with online learning satisfaction are not sufficiently clear. Moreover, the difference in the antecedents of online learning satisfaction between two use contexts, the mobile context and the PC context, has rarely been examined. Based on the Stimulus-Organism-Response (S-O-R) framework, we investigate the key factors (self-efficacy, social interaction, platform quality, and teacher’s expertise) affecting flow and highlight its role in online learning satisfaction, tested empirically through an online survey of 333 online learners. Results show that self-efficacy, teacher’s expertise, platform quality, and social interaction positively affect online learning satisfaction through the mediation of flow. Use context moderates not only the relationship between flow and online learning satisfaction, but also the relationships between social interaction, platform quality, and teacher’s expertise on the one hand and flow on the other. These findings give educators ways to increase flow, add to knowledge about the relationship between flow and online learning satisfaction, and provide references for online learning platforms to enhance learners’ online learning satisfaction under multiple-version affordances.
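
As a sketch of how such mediation and moderation effects are commonly tested (the variable names, effect sizes, and simulated scores below are assumptions for illustration; the paper's actual measurement model may differ), one can run Baron-Kenny style regressions and add a flow x context interaction term:

```python
import numpy as np
import statsmodels.api as sm

# Simulated survey scores (the real study surveyed 333 learners; the effect
# sizes below are assumptions for illustration only).
rng = np.random.default_rng(2)
n = 333
self_efficacy = rng.normal(0, 1, n)                  # stimulus (S)
flow = 0.5 * self_efficacy + rng.normal(0, 1, n)     # organism (O)
satisfaction = 0.6 * flow + rng.normal(0, 1, n)      # response (R)
mobile = rng.integers(0, 2, n)                       # use context (0 = PC)

# Baron-Kenny style mediation: regress O on S, then R on S and O; a shrunken
# direct effect of S alongside a clear O coefficient indicates mediation.
print(sm.OLS(flow, sm.add_constant(self_efficacy)).fit().params)
X = sm.add_constant(np.column_stack([self_efficacy, flow]))
print(sm.OLS(satisfaction, X).fit().params)

# Moderation by use context: the flow x mobile interaction term.
Xm = sm.add_constant(np.column_stack([flow, mobile, flow * mobile]))
print(sm.OLS(satisfaction, Xm).fit().params)         # last entry = interaction
```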


Author(s): Simona Dzitac, Horea Oros, Dan Deac, Sorin Nădăban

In this paper we first present the evolution of the concept of fuzzy normed linear spaces, together with different definitions, approaches, and generalizations. A special section is dedicated to fuzzy Banach spaces. Until now, researchers working with fuzzy normed linear spaces have used a definition of completeness inspired by M. Grabiec’s work on fuzzy metric spaces. We propose another definition, inspired by the work of A. George and P. Veeramani, and prove that it is much more adequate. Finally, some important results in fuzzy fixed point theory are highlighted.
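
For readers unfamiliar with the distinction, the two notions of Cauchy sequence at stake in a fuzzy normed linear space $(X, N, *)$ are usually stated as follows (given here in the forms standard in the literature; the paper's exact formulation may differ in detail):

```latex
% Cauchy in the sense of Grabiec (the definition used until now):
\forall t > 0,\ \forall p \in \mathbb{N}:\quad
  \lim_{n \to \infty} N(x_{n+p} - x_n,\, t) = 1.

% Cauchy in the sense of George and Veeramani (the proposed alternative):
\forall \varepsilon \in (0,1),\ \forall t > 0,\ \exists n_0 \in \mathbb{N}:\quad
  N(x_n - x_m,\, t) > 1 - \varepsilon \quad \text{for all } n, m \ge n_0.
```

Every sequence that is Cauchy in the George-Veeramani sense is Cauchy in Grabiec's sense (take m = n + p), but not conversely, which is one source of the better behaviour of the proposed completeness notion.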


Author(s): Yassine Sabri, Aouad Siham

Multi-area and multi-faceted remote sensing (RS) datasets are widely used due to the increasing demand for accurate and up-to-date information on resources and the environment for regional and global monitoring. In general, processing RS data involves a complex multi-step sequence comprising several independent steps that depend on the type of RS application, and processing RS data for regional disaster and environmental monitoring is recognized as computationally and data intensive. By combining cloud computing and HPC technology, we propose a large-scale RS data processing system suited to a range of applications and capable of real-time, on-demand service. The ubiquity, elasticity, and high-level transparency of the cloud computing model make it possible to run massive RS data management and processing for dynamic environment monitoring in any cloud via a web interface. Hilbert-based data indexing methods are used to optimally query and access RS images, RS data products, and intermediate data. The core of the cloud service provides a parallel file system for large RS data and a data-access interface that improves data locality and optimizes I/O performance. Our experimental analysis demonstrates the effectiveness of the platform.
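
The locality argument behind Hilbert-based indexing can be sketched in a few lines (a minimal illustration, not the system's actual implementation): mapping 2-D tile coordinates to a 1-D Hilbert index and sorting by that index keeps spatially adjacent tiles close together on disk.

```python
def hilbert_index(n, x, y):
    """Map cell (x, y) of an n x n grid (n a power of two) to its 1-D
    Hilbert-curve index; nearby cells receive nearby indices."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the curve stays continuous
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Sorting scene tiles by Hilbert index clusters spatially adjacent tiles
# in the file layout, which is the I/O-locality effect exploited above.
tiles = [(x, y) for x in range(4) for y in range(4)]
print(sorted(tiles, key=lambda t: hilbert_index(4, t[0], t[1])))
```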


Author(s): Yoon-Hwan Kim, Dae-Young Lee, Sang-Hyun Bae, Tae Yeun Kim

Mobile traffic, which increased significantly with the emergence of fourth-generation Long-Term Evolution (4G-LTE) communications and advances in video streaming services, is still growing at an incredible pace. Fifth-generation (5G) mobile communication systems, developed to cope with this drastic increase, aim to achieve ultra-high-speed data transmission, low latency, and the accommodation of many more connected devices than 4G-LTE systems. 5G communication uses high-frequency bands to implement these features, which leads to the inevitable drawback of high path loss. To overcome this disadvantage, small cell technology was developed: small, low-power base stations that extend network coverage and solve the shadow-area problem. Despite these advantages, problems such as interference caused by the deployment of large numbers of small cells and differences among the devices accessing the network still need to be solved, which requires an algorithm for deciding how service is provided. General-purpose algorithms, however, have difficulty responding to the diverse conditions of mobile communication systems, such as sudden traffic surges in certain areas or sudden changes in the mobile population, so machine learning has been applied to this problem. This study employs machine learning algorithms to determine small cell connections. In addition, a 5G macro-only system, the addition of small cells, and the application of machine learning algorithms are compared to quantify the performance improvement contributed by machine learning. Support Vector Machine (SVM), logistic regression, and decision tree algorithms are employed in a training method that uses basic training data together with a small-cell on-off scheme, and the performance enhancement of this method is verified.
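
A minimal sketch of the classification step, under assumed features and a toy labeling rule (the paper's real features and training data are not specified here), might look as follows:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-user features: [macro RSRP (dBm), small-cell RSRP (dBm),
# small-cell load (0-1)]; the label says whether connecting to the small
# cell improved throughput. Both features and rule are assumptions.
rng = np.random.default_rng(3)
n = 2000
macro = rng.uniform(-120, -70, n)
small = rng.uniform(-120, -70, n)
load = rng.uniform(0, 1, n)
X = np.column_stack([macro, small, load])
y = ((small - macro > 5) & (load < 0.7)).astype(int)   # toy decision rule

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for model in (SVC(), LogisticRegression(max_iter=1000), DecisionTreeClassifier()):
    acc = model.fit(Xtr, ytr).score(Xte, yte)
    print(f"{type(model).__name__:>22}: accuracy = {acc:.3f}")
```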


Author(s): M. Kamaladevi, V. Venkatraman

In recent years, imbalanced data classification has been applied in several domains, including detecting fraudulent activity in the banking sector and disease prediction in the healthcare sector. To address the imbalanced classification problem at the data level, strategies such as undersampling and oversampling are widely used, but sampling techniques risk significant information loss. The proposed method involves two processes: undersampling and classification. First, undersampling is performed by means of a Tversky Similarity Indexive Regression model, in which regression combined with the Tversky similarity index is used to analyze the relationship between pairs of instances in the dataset. Next, Gaussian-kernelized decision stump AdaBoosting is used to classify the instances into two classes: the root node of the decision stump makes its decision on the basis of a Gaussian kernel function applied to the average of neighboring points, and the result is obtained at the leaf node. Weights are adjusted to minimize the training error during classification and thereby find the best classifier. Experimental assessment is performed on two imbalanced datasets (Pima Indians Diabetes and Hepatitis). Performance metrics such as precision, recall, area under the ROC curve (AUC), and F1-score are compared with existing undersampling methods. Experimental results show that the prediction accuracy of the minority class improves, minimizing false positives and false negatives.
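
The following sketch is a loose approximation under stated assumptions: the Tversky index is computed in its standard set-theoretic form, the regression component is omitted, and scikit-learn's plain decision stump stands in for the paper's Gaussian-kernelized stump; the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def tversky(a, b, alpha=0.5, beta=0.5):
    """Tversky similarity between two binary feature vectors."""
    inter = np.sum(a & b)
    return inter / (inter + alpha * np.sum(a & ~b) + beta * np.sum(~a & b) + 1e-9)

# Hypothetical imbalanced binary-feature dataset (minority around 12%).
rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(500, 8)).astype(bool)
y = (X[:, 0] & X[:, 1] & X[:, 2]).astype(int)

# Undersample: drop the majority instances most similar to the rest of the
# majority class (the most redundant ones), keeping the informative ones.
maj = np.where(y == 0)[0]
redundancy = np.array([np.mean([tversky(X[i], X[j]) for j in maj if j != i])
                       for i in maj])
keep_maj = maj[np.argsort(redundancy)[:y.sum() * 3]]    # keep a 3:1 ratio
idx = np.concatenate([keep_maj, np.where(y == 1)[0]])

# AdaBoost over depth-1 decision stumps (scikit-learn >= 1.2 'estimator' arg).
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=50).fit(X[idx], y[idx])
print("minority recall:", (clf.predict(X)[y == 1] == 1).mean())
```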


Author(s): S Imavathy, M. Chinnadurai

Nowadays, pattern recognition is a major challenge in the field of data mining. Researchers apply data mining to a wide variety of applications, such as market basket analysis, advertising, and medicine. Conventional algorithms work on transactional databases based on the daily usage of objects and/or the records of patients. The proposed work uses a sequential pattern mining approach with a classification technique, the Threshold-based Support Vector Machine (T-SVM) algorithm. Pattern mining supplies variables according to the user’s interest via a statistical model. The proposed work analyzes gene sequence datasets; the T-SVM technique then classifies the dataset based on the sequential pattern mining approach. In particular, the threshold-based model predicts the upcoming state of interest from sequential patterns, which gives a deeper understanding of the sequential input data and classifies the result according to threshold values. The proposed method is therefore more efficient than conventional methods in terms of achievable classification accuracy, precision, false positive rate, and true positive rate, and it also reduces operating time. The model is implemented in MATLAB 2018a.
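
A rough sketch of the threshold idea (the k-mer encoding, motif label, and threshold value below are assumptions for illustration, not the paper's data or settings) is to shift the SVM's decision boundary by comparing the decision value against a tunable threshold instead of the default zero:

```python
import numpy as np
from sklearn.svm import SVC

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode(seq, k=6):
    """Turn a nucleotide string into overlapping one-hot k-mer windows."""
    onehot = np.eye(4)[[BASES[c] for c in seq]]
    return np.array([onehot[i:i + k].ravel() for i in range(len(seq) - k + 1)])

# Hypothetical data: random sequences, labelled by presence of a toy motif.
rng = np.random.default_rng(5)
seqs = ["".join(rng.choice(list("ACGT"), 60)) for _ in range(100)]
X = np.vstack([encode(s) for s in seqs])
y = np.array([int("GGC" in s[i:i + 6]) for s in seqs for i in range(len(s) - 5)])

# Threshold-based SVM: compare the SVM decision value against a user-chosen
# threshold instead of the default 0, trading TP rate against FP rate.
svm = SVC().fit(X, y)
threshold = 0.2
pred = (svm.decision_function(X) > threshold).astype(int)
print("TP rate:", (pred[y == 1] == 1).mean(), "FP rate:", (pred[y == 0] == 1).mean())
```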


Author(s): Ioan Dumitrache, Simona Iuliana Caramihai, Dragos Constantin Popescu, Mihnea Alexandru Moisescu, Ioan Stefan Sacala

There are currently categories of manufacturing enterprises whose structure, organization, and operating context exhibit an extremely high degree of complexity, especially in the way their various components interact and influence each other. For them, a series of paradigms has been developed, including intelligent manufacturing, smart manufacturing, and cognitive manufacturing, all of which rest equally on information and knowledge management, the management and interpretation of data flows, and problem-solving approaches. This work presents a new vision of the evolution of the future enterprise based on concepts and attributes acquired from the field of biology. Our approach addresses in a systemic manner the structural, functional, and behavioral aspects of the enterprise, seen as a complex dynamic system. We propose an architecture and management methodology based on the human brain, in which problem solving is achieved through Perception, Memory, Learning, and Behavior Generation mechanisms. To support the design of such an architecture and to allow a faster learning process, a software modeling and simulation platform was developed and is briefly presented.
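
To make the loop concrete, a deliberately minimal sketch (an assumption for illustration, not the authors' platform) could wire the four mechanisms together like this:

```python
from dataclasses import dataclass, field

# Minimal Perception - Memory - Learning - Behavior Generation loop.
@dataclass
class BrainLikeEnterprise:
    memory: dict = field(default_factory=dict)   # situation -> action scores

    def perceive(self, event: dict) -> str:
        """Reduce a raw event from the data flow to a situation label."""
        return f"{event['unit']}:{event['kpi_state']}"

    def behave(self, situation: str) -> str:
        """Pick the best-known response, or escalate if the situation is new."""
        known = self.memory.get(situation)
        return max(known, key=known.get) if known else "escalate_to_operator"

    def learn(self, situation: str, action: str, reward: float) -> None:
        """Reinforce actions that improved the enterprise KPIs."""
        scores = self.memory.setdefault(situation, {})
        scores[action] = scores.get(action, 0.0) + reward

e = BrainLikeEnterprise()
s = e.perceive({"unit": "assembly", "kpi_state": "throughput_low"})
e.learn(s, "reallocate_capacity", reward=1.0)
print(e.behave(s))   # -> reallocate_capacity
```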


Author(s): Francisco José Pérez, Alberto García, Víctor J. Garrido, Manuel Esteve, Marcelo Zambrano

Nowadays, the free movement of people and goods within the European Union is a topical issue. Each member state and border practitioner exploits its own set of assets for border surveillance and control, and states have invested significantly in the assets and infrastructure necessary to manage and control transit in border areas. As new capabilities and assets become available and current Command and Control (C2) systems age, border control practitioners face the growing challenge of integrating new assets and commanding and controlling all of them in a coordinated, coherent way without having to invest in completely new C2 systems built from the ground up. Therefore, and bearing in mind that the systems developed to date are old and do not share a global standard data model, two needs have been identified: on one side, a platform able to interact with multiple UxVs (land, sea, and air), and on the other, a unified data model that makes possible a global and far more concise analysis of what happens in areas of conflict.
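
As an illustration of what such unification might look like (a hypothetical schema, not the project's actual data model), a single observation record that land, sea, and air UxVs all emit would let one C2 layer fuse heterogeneous reports:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Domain(Enum):
    LAND = "land"
    SEA = "sea"
    AIR = "air"

@dataclass
class UxVObservation:
    asset_id: str            # unique identifier of the reporting UxV
    domain: Domain           # operating domain of the asset
    timestamp: datetime      # time of the observation (UTC)
    lat: float               # WGS-84 latitude of the detected object
    lon: float               # WGS-84 longitude of the detected object
    classification: str      # e.g. "vehicle", "vessel", "person"
    confidence: float        # detector confidence in [0, 1]

obs = UxVObservation("uav-07", Domain.AIR, datetime.now(timezone.utc),
                     43.35, -1.79, "vessel", 0.87)
print(obs)
```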

