International Journal of Advanced Pervasive and Ubiquitous Computing
Latest Publications


TOTAL DOCUMENTS

214
(FIVE YEARS 16)

H-INDEX

6
(FIVE YEARS 1)

Published by IGI Global

1937-9668, 1937-965X

Author(s):  
Song Ji ◽  
Weifang Zhai ◽  
Yiran Jiang

Workflow technology is the core technology for business process modeling, operation, monitoring, and management, and ultimately for business process automation. A workflow-based office automation system separates code writing from process operation: when business processes change, there is no need to modify the program; users simply customize the workflow through a visual process-customization interface. The workflow engine is the core of the whole workflow management system and its control center. This article designs a workflow engine based on a relational structure, covering the design of the workflow engine classes, functional components, interfaces, and database. Finally, a flexible office automation system with customizable business processes is implemented.
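The separation of process definition from code can be sketched as a minimal data-driven engine: process definitions live in data keyed like relational rows, so changing a business process means editing data, not program logic. All names below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a data-driven workflow engine: transitions are stored as
# rows (as they would be in relational tables), so a new business process is
# just new data, with no code changes.
class WorkflowEngine:
    def __init__(self):
        # "table" of transitions, keyed like relational rows
        self.transitions = {}   # (process, state) -> next state

    def define(self, process, steps):
        """Register an ordered list of step names as a linear process."""
        for current, nxt in zip(steps, steps[1:]):
            self.transitions[(process, current)] = nxt

    def advance(self, process, state):
        """Move a process instance from `state` to its successor, if any."""
        return self.transitions.get((process, state))

engine = WorkflowEngine()
engine.define("leave_request", ["draft", "manager_review", "hr_review", "approved"])
print(engine.advance("leave_request", "draft"))      # manager_review
print(engine.advance("leave_request", "hr_review"))  # approved
```

Because `define` only writes rows, a visual process designer could emit the same data without touching engine code, which is the flexibility the abstract describes.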


Author(s):  
Rakshith M.D. Hegde ◽  
Harish H Kenchannavar

The smart home is an environment that enables the resident to interact with home appliances that provide resident-intended services. In recent years, predicting resident intention from contextual modalities such as activity, speech, emotion, object affordances, and physiological parameters has gained importance in the field of pervasive computing. A contextual modality is a feature through which the resident interacts with home appliances such as televisions, lights, doors, and fans. These modalities help the appliances predict resident intentions, enabling them to recommend resident-intended services such as opening and closing doors or turning televisions, lights, and fans on and off. Resident-appliance interaction can be achieved by embedding artificial-intelligence-based machine learning algorithms into the appliances. This article surveys recent research on the contextual modalities and the associated machine learning algorithms required to build a resident intention prediction system. A classification taxonomy of contextual modalities is also discussed.
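The prediction task the survey covers can be illustrated with a toy nearest-match rule over contextual modality features: pick the past action whose recorded context overlaps most with the current observation. The feature names, history records, and action labels below are invented for illustration and are not from any surveyed system.

```python
# Hypothetical sketch: predict a resident's intended appliance action from
# contextual modality features (activity, time of day, ...) by finding the
# best-matching context in the interaction history (a 1-nearest-match rule).
def predict_intention(observed, history):
    """Return the action whose recorded context best matches `observed`."""
    def overlap(record):
        ctx = record["context"]
        return sum(1 for k, v in observed.items() if ctx.get(k) == v)
    return max(history, key=overlap)["action"]

history = [
    {"context": {"activity": "reading", "time": "night"},   "action": "turn_on_lamp"},
    {"context": {"activity": "leaving", "time": "morning"}, "action": "lock_door"},
]
print(predict_intention({"activity": "reading", "time": "night"}, history))  # turn_on_lamp
```

Real systems in the survey replace this matching rule with trained classifiers over richer modalities (speech, emotion, physiological signals), but the input/output shape is the same: context in, intended service out.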


Author(s):  
Praveen Shivashankrappa Challagidad ◽  
Mahantesh N. Birje

Data loss occurs due to crashes, correlated failures, logical failures, power outages, and security threats. Several techniques (e.g., NoBackup, WARBackup, and LocalRecovery) are used to recover data locally. Strongly consistent cloud services (SCCS) must provide good performance and high availability; however, conventional strong-consistency replication methods limit the availability of replicated services when recovering huge amounts of data across wide-area links. Because of the distributed nature of cloud infrastructures, remote recovery mechanisms are needed for high availability of services and data. To address these issues, the article proposes a hierarchical system architecture for replication across a data center and employs the backward atomic backup recovery technique (BABRT) for local recovery, with remote recovery for high availability of cloud services and data. A mathematical model for BABRT is described. Simulation results show that BABRT reduces storage consumption, recovery time, window of vulnerability, and failure rates compared to other recovery models.
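The abstract does not spell out BABRT's internals, so the sketch below only illustrates the general local-first, remote-fallback recovery idea it builds on: restore from a replica inside the same data center when one exists, and fetch across the wide-area link only when it does not. All names are assumptions for illustration.

```python
# Generic sketch of local-first recovery with remote fallback. Local recovery
# is preferred because wide-area transfers lengthen the recovery time and the
# window of vulnerability.
def recover(block_id, local_replicas, remote_replicas):
    """Return (source, data) for a lost block, preferring local recovery."""
    if block_id in local_replicas:
        return "local", local_replicas[block_id]
    if block_id in remote_replicas:
        return "remote", remote_replicas[block_id]
    raise KeyError(f"block {block_id!r} is unrecoverable")

local = {"b1": b"alpha"}
remote = {"b1": b"alpha", "b2": b"beta"}
print(recover("b1", local, remote))  # ('local', b'alpha')
print(recover("b2", local, remote))  # ('remote', b'beta')
```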


Author(s):  
Vishal Passricha ◽  
Shubhanshi Singhal

CNNs are playing a vital role in the field of automatic speech recognition (ASR). Most CNNs employ a softmax activation layer to minimize cross-entropy loss; this layer generates the posterior probabilities in classification tasks. SVMs also offer promising results in ASR. In this article, the two approaches, CNNs and SVMs, are combined into a new hybrid architecture. The model replaces the softmax layer, i.e., the last layer of the CNN, with SVMs to deal effectively with high-dimensional features. The model should be interpreted as a special form of structured SVM, named the convolutional neural SVM (CNSVM). CNSVMs incorporate the characteristics of both models: CNNs learn features from the speech signal, and SVMs classify these features into the corresponding text. The parameters of the CNNs and SVMs are trained jointly using sequence-level max-margin and sMBR criteria. The word error rates achieved by the CNSVM on Hindi and Punjabi speech corpora are 13.43% and 15.86%, respectively, a significant improvement over CNNs.
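The max-margin idea behind swapping out the softmax layer can be shown with a multiclass hinge loss on the last layer's class scores: instead of cross-entropy over probabilities, margin violations are penalized directly. The scores, class names, and margin below are toy values for illustration, not the paper's trained model or its exact sequence-level criterion.

```python
# Multiclass (Crammer-Singer style) hinge loss for a single example: every
# wrong class whose score comes within `margin` of the correct class's score
# contributes to the loss. A softmax/cross-entropy layer would instead turn
# the scores into probabilities and penalise the log-probability of the label.
def hinge_loss(scores, correct, margin=1.0):
    return sum(max(0.0, margin + s - scores[correct])
               for cls, s in scores.items() if cls != correct)

scores = {"yes": 2.0, "no": 0.5, "maybe": 1.5}   # last-layer class scores
print(hinge_loss(scores, "yes"))  # 0.5 -- only "maybe" violates the margin
```

Training then pushes the correct class's score at least `margin` above every competitor, which is the max-margin property the CNSVM exploits.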


Author(s):  
Vikram Singh

Modern information systems are expected to assist users with diverse goals by exploiting the topical dimension ('what' the user is searching for) of information needs. The intent dimension ('why' the user is searching), however, has received comparatively little attention. Traditionally, intent is the 'immediate reason, purpose, or goal' that motivates the user's search, captured in search contexts (pre-search, in-search, pro-search) that an ideal information system would be able to use. This article proposes a novel intent estimation strategy based on the intuition that captured intent can be used to proactively extract likely results. The captured pre-search context adapts query-term proximities within matched results alongside document-term statistics, and combines pseudo-relevance feedback with user-relevance feedback for the in-search context. The assessment asserts the superior performance of the proposed strategy over its equivalents on several trade-offs, e.g., novelty, diversity (coverage, topicality), retrieval (precision, recall, F-measure), and exploitation versus exploration.


Author(s):  
Kamalendu Pal

The use of radio frequency identification (RFID) technology has attracted huge attention from the supply chain business community, owing to its wide range of applications in logistics and supply chain management. This paper presents a brief overview of a simple industrial RFID system and then describes the basic tag collision problem. Despite many useful applications, RFID tag collisions pose a major obstacle to fast tag identification. Different algorithmic solutions are available to overcome the tag collision problem in industrial supply chains. Based on the binary search algorithm (BSA), this paper describes two variations of binary anti-collision search algorithms for the tag identification process: dynamic and backtracking. Simulation-based experiments evaluate the performance of these algorithms when handling multiple RFID tags simultaneously. The backtracking binary search algorithm shows clear advantages in the tag identification process compared to the other two algorithms.
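The binary-search anti-collision idea underlying all three algorithms can be sketched as a prefix-splitting simulation: the reader queries an ID prefix, and if more than one tag responds (a collision), it recurses on both child prefixes until every tag answers alone. This is the generic textbook scheme, not the paper's exact dynamic or backtracking variants.

```python
# Simulation of binary-tree anti-collision: tags are fixed-length bit-string
# IDs; a query with a prefix is a "collision" when 2+ tags match, a successful
# identification when exactly 1 matches, and silence when none match.
def identify(tags, prefix=""):
    """Return all tag IDs resolvable from `prefix` via binary splitting."""
    matching = [t for t in tags if t.startswith(prefix)]
    if not matching:
        return []                     # no tag responded
    if len(matching) == 1:
        return matching               # exactly one response: identified
    # collision: split the ID space and query both halves
    return identify(tags, prefix + "0") + identify(tags, prefix + "1")

tags = ["0011", "0101", "1100"]
print(identify(tags))  # ['0011', '0101', '1100']
```

The dynamic and backtracking variants in the paper reduce the number and length of these queries; the split-on-collision structure stays the same.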


Author(s):  
Lokesh B. Bhajantri ◽  
Tabassum N. Mujawar

Cloud computing is the most prevalent paradigm for providing computing resources and services over the Internet. With the immense growth in services provided by cloud computing, the trend of sharing large-scale and confidential data on the cloud has increased. Although cloud computing provides many benefits, ensuring the security of data stored in the cloud is the biggest challenge, and this security concern is the main barrier to cloud adoption. One important security aspect is a fine-grained access control mechanism. The most widely used and efficient access control scheme for cloud computing is attribute-based encryption (ABE), which provides a technique for embedding access policies cryptographically into the encryption process. The article presents an overview of various existing attribute-based encryption schemes and traditional access control models, together with a comparison of existing ABE schemes for cloud computing on the basis of various criteria.
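In real ABE the access policy is enforced by the cryptography itself; the toy below only illustrates the policy side of the idea, checking whether a user's attribute set satisfies a boolean access structure. It performs no encryption and is not a substitute for an ABE scheme; the policy and attribute names are invented for illustration.

```python
# Evaluate a nested boolean access structure over a set of attribute strings.
# A policy node is either a plain attribute string, or a pair
# (op, [children]) with op in {"AND", "OR"}.
def satisfies(policy, attrs):
    """True if the attribute set `attrs` satisfies `policy`."""
    if isinstance(policy, str):
        return policy in attrs
    op, children = policy
    results = (satisfies(child, attrs) for child in children)
    return all(results) if op == "AND" else any(results)

policy = ("AND", ["doctor", ("OR", ["cardiology", "radiology"])])
print(satisfies(policy, {"doctor", "radiology"}))   # True
print(satisfies(policy, {"nurse", "cardiology"}))   # False
```

In ciphertext-policy ABE this structure is embedded in the ciphertext, so only keys issued for a satisfying attribute set can decrypt, which is what makes the access control fine-grained.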


Author(s):  
Padma Lochan Pradhan

This literature survey provides basic data regarding the first step of risk identification and analysis toward achieving a secure infrastructure. Demand and risk are two sides of the same coin: demand is directly proportional to risk, while preventive control is inversely proportional to it. The necessity of preventive control in any organization has increased because of changes in the logic, structure, and type of technology applied to services, which generate risk. As the business grows along with its technology, risk grows and spreads across the infrastructure. The focus is on protecting, detecting, correcting, verifying, and validating the Unix file system. This survey article proposes to secure the Unix file system by applying hardening, re-configuration, and access control mechanisms up to the highest level of preventive control.
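One concrete preventive control of the kind the survey discusses can be sketched as a small check-and-harden step: detect a world-writable file and strip the offending permission bit. The specific bit checked and the remediation are illustrative choices, not the survey's full hardening policy.

```python
# Detect and remove 'other'-write permission on a Unix file: a basic
# protect/detect/correct cycle over file-system permissions.
import os
import stat

def world_writable(path):
    """True if `path` grants write permission to 'other'."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

def harden(path):
    """Strip the 'other'-write bit: a minimal re-configuration step."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~stat.S_IWOTH)

# Example: create a deliberately loose file, then harden it.
with open("demo.conf", "w") as f:
    f.write("setting=1\n")
os.chmod("demo.conf", 0o666)     # world-writable: a preventable risk
harden("demo.conf")
print(world_writable("demo.conf"))  # False
```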


Author(s):  
Stefan Piasecki

Gamification, as a tool or procedure for adding entertaining and motivating elements to usually non-entertaining environments such as schools or workplaces, is becoming more and more popular. E-learning platforms like Moodle provide tools and sets of functions for adding elements of gamification. An important factor, especially in education, is technology: individual achievements and progress can be recorded, measured, tracked, and visualized, and therefore identified and honored through bonus points, awards, or rankings. This is where gamification can add challenge and excitement to learning.


Author(s):  
Vishal Passricha ◽  
Ashish Chopra ◽  
Shubhanshi Singhal

Cloud storage (CS) is gaining much popularity nowadays because it offers low-cost, convenient network storage services. In this big-data era, the explosive growth of digital data moves users toward CS, but a large volume of this data is redundant, which puts heavy storage pressure on CS systems. Data deduplication is an effective data reduction technique. The dynamic nature of data makes security and ownership of data very important issues. Proof-of-ownership schemes are a robust way to verify the ownership claimed by any owner; however, they complicate the deduplication process because encryption methods have varying characteristics. The convergent encryption (CE) scheme is widely used for secure data deduplication, but with CE-based schemes a user can still decrypt the cloud data even after losing ownership of it. This article addresses the problem of ownership revocation by proposing a secure deduplication scheme for encrypted data. The proposed scheme enhances security against unauthorized encryption and poison attacks on the predicted set of data.
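The convergent encryption idea the article builds on can be shown in a few lines: the key is derived from the data itself, so identical plaintexts produce identical ciphertexts and the server can deduplicate without reading them. The XOR "cipher" below is a toy for demonstration only and is not secure; real CE uses a proper block cipher keyed with the data hash.

```python
# Toy convergent encryption: key = H(data), so two users uploading the same
# file produce byte-identical ciphertexts that the server can deduplicate.
import hashlib

def ce_encrypt(data: bytes) -> bytes:
    key = hashlib.sha256(data).digest()                 # convergent key = H(data)
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(b ^ k for b, k in zip(data, stream))   # toy XOR keystream

# Two users upload the same file: ciphertexts match, so it is stored once.
c1 = ce_encrypt(b"quarterly report")
c2 = ce_encrypt(b"quarterly report")
print(c1 == c2)                              # True -> deduplicable
print(c1 == ce_encrypt(b"different file"))   # False
```

This determinism is also the root of the revocation problem the article tackles: anyone who ever held the data can re-derive the key, so losing ownership does not by itself revoke decryption ability.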

