VFAST Transactions on Software Engineering
Latest Publications


Total documents: 45 (five years: 4)
H-index: 3 (five years: 0)
Published by: VFAST
ISSN: 2309-3978, 2411-6246

Author(s): NASHWAN ALROMEMA, FAHAD ALOTAIBI

Time-varying data models store data related to time instances and offer different types of timestamping. These modeling approaches form one of the most important parts of many database applications, such as meteorological, banking, biomedical, accounting, scheduling, and reservation systems, sensor-based systems, real-time monitoring applications, and applications that maintain huge historical records. This research work introduces the state-of-the-art modeling approaches for time-varying data. Furthermore, we show how to represent a running example using the different approaches and give a comparative study of the storage requirements and ease of use of each model.
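As a minimal illustration of the kind of timestamping such models use, the Python sketch below shows a valid-time (tuple-timestamped) record. The record layout, the salary example, and the `salary_at` helper are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from datetime import date

# Valid-time (tuple-timestamped) record: each fact carries the period
# during which it holds in the modelled reality.
@dataclass
class SalaryFact:
    employee: str
    salary: int
    valid_from: date
    valid_to: date          # exclusive upper bound; date.max means "until changed"

history = [
    SalaryFact("Alice", 50_000, date(2019, 1, 1), date(2020, 7, 1)),
    SalaryFact("Alice", 55_000, date(2020, 7, 1), date.max),
]

def salary_at(facts, employee, when):
    """Return the salary valid for `employee` at time `when`, or None."""
    for f in facts:
        if f.employee == employee and f.valid_from <= when < f.valid_to:
            return f.salary
    return None

print(salary_at(history, "Alice", date(2020, 1, 1)))   # 50000
print(salary_at(history, "Alice", date(2021, 1, 1)))   # 55000
```

Bitemporal variants additionally attach a transaction-time period to each row; the trade-offs among such variants are what the comparison concerns.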


Knowing the exact number of clusters in a digital image significantly facilitates precise clustering of that image. This paper proposes a new technique for extracting the exact number of clusters from grey-scale images. It analyzes the contents of the input image and adaptively reserves one distinct cluster for each distinct grey value, so the total count of grey values found in the image determines the exact number of clusters. Because this count depends on the image contents, the number of clusters changes from image to image. Once obtained, this number is given as input to a Gaussian Mixture Model (GMM), which clusters the image. The GMM works with a finite number of clusters and forms a mixture of the various spectral densities contained in the image. The proposed method thus allows the GMM to adapt itself to the changing number of clusters, and the resulting model, together with the GMM, is named the Adaptive Finite Gaussian Mixture Model (AFGMM). The clustering performance of the AFGMM is evaluated through the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). Both performance measures confirm that the exact number of clusters is essential for reliably analyzing an image.
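The following Python sketch illustrates the idea as described, using scikit-learn's GaussianMixture: the number of components is set adaptively to the count of distinct grey values, and the clustered image is scored with MSE and PSNR. The synthetic test image and all parameter choices are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 8-bit grey-scale image (stand-in for a real input image).
rng = np.random.default_rng(0)
image = rng.choice([40, 120, 200], size=(64, 64)).astype(np.uint8)

# Adaptive step: one cluster per distinct grey value found in the image.
n_clusters = len(np.unique(image))

# Fit a finite Gaussian mixture with exactly that many components.
pixels = image.reshape(-1, 1).astype(float)
gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(pixels)
labels = gmm.predict(pixels)

# Reconstruct the clustered image from the component means and score it.
clustered = gmm.means_[labels].reshape(image.shape)
mse = np.mean((image.astype(float) - clustered) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
print(n_clusters, round(mse, 3), round(psnr, 2))
```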


Accurate feature detection during image retrieval is important. Data retrieved through image retrieval methods such as content-based image retrieval (CBIR) is high-dimensional and needs suitable storage and access methods; CBIR uses queries such as query-by-feature and query-by-example. The focus here is on accurate feature detection, because retrieval quality depends on it. In simple words, the objectives are to develop a method that classifies features, with normalization, for efficient image retrieval from a bulk dataset, and to improve local and global feature retrieval with automatic and accurate feature detection. After a study of different detection-based systems, a methodology is proposed that improves retrieval based on feature detection, where feature detection is improved through a combination of DWT + PCA + KSVM (polynomial, RBF, and linear kernels).
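A hedged sketch of a DWT + PCA + SVM feature pipeline of the kind named above, using PyWavelets and scikit-learn. The synthetic images, labels, and parameters (Haar wavelet, 10 principal components, RBF kernel) are assumptions, and the multi-kernel combination described in the paper is only approximated here by a single interchangeable SVC kernel.

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dwt_features(image):
    """Flatten the level-1 Haar DWT sub-bands into one feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

# Synthetic stand-in data: 40 random 32x32 grey images with binary labels.
rng = np.random.default_rng(0)
images = rng.random((40, 32, 32))
labels = rng.integers(0, 2, size=40)

X = np.array([dwt_features(img) for img in images])

# PCA reduces the DWT feature dimension; an SVM classifies the result.
# kernel="poly" or kernel="linear" can be swapped in for the other kernels.
model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X, labels)
print(model.predict(X[:5]))
```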


With today's rapidly advancing technology, the exchange of information and data is a very pertinent matter. The world has recently witnessed the effects of information leakage through the WikiLeaks affair, and huge amounts of data are now shared over many different platforms. The Global System for Mobile Communications (GSM) is one of the most reliable platforms, used by people all over the world for text as well as voice communication. With tools such as Android Studio and NetBeans, it is now possible to encrypt text sent over GSM so that it can be decrypted at the other end of the communication path. The encryption and decryption of voice transmitted over the GSM network, however, remains an open question. In the domain of real-time voice encryption, much of the existing work concerns voice exchanged over the Internet Protocol; compared with Voice over IP (VoIP), voice over the GSM network has seen little research on its security aspects. The purpose of this paper is to document the results of a project aimed at developing a platform for mobile phones that allows secure communication over the GSM network. The most suitable way to achieve this objective is to use an open-source operating system (OS), so that the source code is easily accessible and usable. This paper therefore focuses on the Android OS, which is compatible with all Android mobile phones; because Android phones are so widely used, the maximum number of mobile phone users can benefit. The use of cryptographic algorithms to secure voice communication over the GSM network is also part of this paper. The work revolves around the Java programming language, since the Android application was developed in Java using Android Studio, while NetBeans was employed for developing the voice-encryption algorithms.
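The project itself is implemented in Java on Android; purely as an illustration of the underlying idea (symmetric encryption of raw audio frames before they enter the GSM channel), here is a small Python sketch using the `cryptography` package. The key handling, frame size, and library choice are assumptions; codec integration, authentication, and transport are out of scope.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Shared secret; real key exchange and management are outside this sketch.
key = os.urandom(32)                             # AES-256 key

def encrypt_frame(pcm_bytes: bytes) -> bytes:
    """Encrypt one raw audio frame before handing it to the GSM channel."""
    nonce = os.urandom(16)                       # fresh CTR nonce per frame
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(pcm_bytes) + enc.finalize()

def decrypt_frame(blob: bytes) -> bytes:
    """Reverse operation on the receiving handset."""
    nonce, body = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return dec.update(body) + dec.finalize()

frame = b"\x00\x01" * 160                        # dummy 320-byte PCM frame
assert decrypt_frame(encrypt_frame(frame)) == frame
```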


With the growth of technology, people and companies rely more and more on software systems. We therefore need products and software that are trustworthy, reliable, and economical, as well as maintainable, dependable, and usable. If the software is developed with great accuracy, everything it does goes as planned, and once it reaches the market its success rate will be high. But if there is a bug in the software, not only will the software fail, the failure will also affect the organization responsible for making it; the failure of software thus has a great impact on the organization. In this research, we present a detailed and critical analysis of the causes of software failure and of the factors that hinder project success. We also study existing software development processes and analyze how they can help reduce these causes.


A number of recommendation systems are available on the internet to help jobseekers, but these systems only generate job recommendations based on the input entered by the user. A problem observed among Pakistani jobseekers is that many are not clear about the field in which they should start working or to which they should switch. Before searching and applying for a job, one should be clear about one's profession and the important skills related to it. Based on these issues, there is a need for a system that addresses profession selection and skill suggestion, so that it becomes easier for a jobseeker to apply for a specific job. In this research, the problem described above is addressed by proposing a model based on Association Rule Mining, a data mining technique. In this model, professions are recommended to jobseekers by matching the profile of the applicant with the profiles of people who have a similar background, that is, similar education, professional skills, and type of job. The data collected for this research is itself a major contribution, as it was gathered from several different sources; we will make it publicly available so that others can use it for further research.
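A minimal sketch of the rule-mining step, assuming the mlxtend implementation of Apriori and a toy set of jobseeker profiles; the item names and thresholds are made up for illustration and are not the paper's dataset or parameters.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from mlxtend.preprocessing import TransactionEncoder

# Toy jobseeker profiles: education, skills, and the profession actually held.
profiles = [
    ["BS_CS", "python", "sql", "job:data_analyst"],
    ["BS_CS", "python", "statistics", "job:data_analyst"],
    ["BS_SE", "java", "android", "job:mobile_developer"],
    ["BS_SE", "java", "sql", "job:backend_developer"],
    ["BS_CS", "python", "sql", "job:data_analyst"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(profiles).transform(profiles), columns=te.columns_)

# Mine frequent itemsets, then keep only rules whose consequent is a profession:
# "people with this profile usually hold this job" becomes a recommendation.
itemsets = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
job_rules = rules[rules["consequents"].apply(
    lambda items: any(str(i).startswith("job:") for i in items))]
print(job_rules[["antecedents", "consequents", "support", "confidence"]])
```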


Requirement analysis is an important phase in software development; the failure or success of a software product depends on it. In this paper, a detailed critical analysis was conducted to find out the issues behind software project failures. Several issues were found to be associated with this phase, such as customer ambiguity and requirement changes during the project. The study was conducted through questionnaires to investigate these issues in the Pakistani software industry.


Heart disease is increasing rapidly for a number of reasons. If we can predict cardiac arrest (a dangerous condition of the heart) in its early stages, it is very helpful for treating the disease. Although doctors and health centres collect data daily, most do not use machine learning and pattern matching techniques to extract the knowledge that can be very useful for prediction. Bioinformatics is a real-world application of machine learning that extracts patterns from datasets using several data mining techniques. In this paper, the data and attributes are taken from the UCI repository. Attribute extraction is very effective for mining information for prediction; by utilizing it, various patterns can be derived to predict heart disease earlier. We examine a number of techniques based on Artificial Neural Networks (ANN). The accuracy is calculated and visualized: an ANN alone gives 94.7%, but with Principal Component Analysis (PCA) the accuracy improves to 97.7%.
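A rough sketch of the ANN versus ANN + PCA comparison using scikit-learn; the data here is a synthetic stand-in for the UCI heart-disease attributes, and the network size and preprocessing are assumptions. The 94.7% and 97.7% figures come from the paper, not from this code.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the UCI heart-disease data (13 attributes).
X, y = make_classification(n_samples=300, n_features=13, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy(model):
    """Fit on the training split and report test-set accuracy."""
    return model.fit(X_tr, y_tr).score(X_te, y_te)

ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
ann_pca = make_pipeline(StandardScaler(), PCA(n_components=8),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                      random_state=0))

print("ANN accuracy:      ", round(accuracy(ann), 3))
print("ANN + PCA accuracy:", round(accuracy(ann_pca), 3))
```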


Image processing is a type of signal processing in which the input is an image and the output may be another image or a set of features related to that image; images are handled as 2D signals by image processing methods. For fast processing of images, architectures suited to the different responsibilities within an image processing pipeline are important, and various architectures have been used to resolve the high communication cost in image processing systems. In this paper, we give a detailed review of the image processing architectures commonly used to obtain higher image quality. The architectures discussed are FPGA, the focal-plane SIMPil, and the SURE engine. At the end, we also present a comparative study of MSIMD architectures to help identify the best one.
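As a loose software analogy for the SIMD idea behind architectures such as SIMPil and MSIMD, the numpy sketch below applies the same operation to every pixel at once instead of looping pixel by pixel. It illustrates only the data-parallel programming model, not any of the surveyed hardware.

```python
import numpy as np

image = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(np.uint8)

# Scalar-style processing: one pixel at a time, as a purely sequential machine would.
def threshold_loop(img, t=128):
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = 255 if img[i, j] > t else 0
    return out

# Data-parallel (SIMD-like) processing: the same operation over the whole array at once.
def threshold_simd(img, t=128):
    return np.where(img > t, 255, 0).astype(img.dtype)

assert np.array_equal(threshold_loop(image), threshold_simd(image))
```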


In the age of emerging technologies, the amount of data is increasing very rapidly, and with this massive increase the required computation grows as well. A classical computer executes instructions sequentially, but times have changed and technology has advanced: we now manage gigantic data centers that perform billions of executions on a regular schedule. In truth, if we look deep into processor architecture and mechanisms, even a sequential machine works in parallel. Parallel computing is growing quickly as an alternative to distributed computing. The performance-to-functionality ratio of parallel systems is high, and their I/O usage is lower because of the ability to perform all operations simultaneously; by contrast, the performance-to-functionality ratio of distributed systems is low, and their I/O usage is higher because they cannot perform all operations simultaneously. In this paper, an overview of distributed and parallel computing is given: the basic concepts of the two models are discussed, and their pros and cons are described. From many aspects, we conclude that parallel systems are better than distributed systems.
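A minimal Python illustration of the sequential versus parallel execution model discussed above, using the standard multiprocessing module; the workload function and job sizes are made up for demonstration.

```python
import time
from multiprocessing import Pool

def work(n):
    """A made-up CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [1_000_000] * 8

    t0 = time.perf_counter()
    sequential = [work(n) for n in jobs]          # one instruction stream
    t1 = time.perf_counter()

    with Pool() as pool:                          # one worker process per core
        parallel = pool.map(work, jobs)
    t2 = time.perf_counter()

    assert sequential == parallel
    print(f"sequential: {t1 - t0:.2f}s  parallel: {t2 - t1:.2f}s")
```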

