Network and Communication Technology Innovations for Web and IT Advancement
Latest Publications

TOTAL DOCUMENTS: 19 (FIVE YEARS: 0)
H-INDEX: 0 (FIVE YEARS: 0)
Published By: IGI Global
ISBN: 9781466621572, 9781466621589

Author(s):  
Maytham Safar ◽  
Hasan Al-Hamadi ◽  
Dariush Ebrahimi

Wireless sensor networks (WSN) have emerged in many applications as a platform to collect data and monitor a specified area with minimal human intervention. The initial deployment of WSN sensors forms a network that consists of randomly distributed devices/nodes in a known space. Advancements in low-power micro-electronic circuits have made WSN a feasible platform for many applications. However, two major concerns govern the efficiency, availability, and functionality of the network: power consumption and fault tolerance. This paper introduces a new algorithm called the Power Efficient Cluster Algorithm (PECA). The proposed algorithm reduces the power consumption required to set up the network by effectively reducing the total number of radio transmissions required in the network setup (deployment) phase. As a fault tolerance approach, the algorithm stores information about each node for easier recovery of the network should any node fail. The proposed algorithm is compared with the Self Organizing Sensor (SOS) algorithm; results show that PECA consumes significantly less power than SOS.
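
As an illustration only (not the authors' PECA implementation), a minimal Python sketch of cluster-based setup shows the general idea of cutting setup transmissions while recording membership for recovery; the names, the greedy election rule, and the cluster radius are all assumptions.

```python
# Hypothetical sketch: greedy cluster formation for a randomly deployed WSN.
# One broadcast per elected head stands in for per-node handshakes; each node
# stores its cluster head so the network can be rebuilt if a node fails.
import math
import random

CLUSTER_RADIUS = 20.0  # assumed communication range for cluster membership

class Node:
    def __init__(self, node_id, x, y):
        self.node_id = node_id
        self.x, self.y = x, y
        self.cluster_head = None   # stored for recovery if the head fails

def distance(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def form_clusters(nodes):
    """Each unassigned node elects itself head and claims in-range neighbours
    with a single (simulated) broadcast."""
    transmissions = 0
    heads = []
    for node in nodes:
        if node.cluster_head is not None:
            continue
        node.cluster_head = node          # node becomes its own head
        heads.append(node)
        transmissions += 1                # one setup broadcast for this cluster
        for other in nodes:
            if other.cluster_head is None and distance(node, other) <= CLUSTER_RADIUS:
                other.cluster_head = node # membership stored for fault recovery
    return heads, transmissions

random.seed(1)
nodes = [Node(i, random.uniform(0, 100), random.uniform(0, 100)) for i in range(50)]
heads, tx = form_clusters(nodes)
print(f"{len(heads)} cluster heads elected with {tx} setup transmissions")
```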


Author(s):  
Wael Toghuj ◽  
Ghazi I. Alkhatib

Digital communication systems are an important part of modern society, and they rely on computers and networks to achieve critical tasks. Critical tasks require systems with a high level of reliability that can provide continuous correct operation. This paper presents a new algorithm for data encoding and decoding using a two-dimensional code that can be implemented in digital communication systems, electronic memories (DRAMs and SRAMs), and web engineering. The developed algorithms correct three errors in a codeword and detect four, reaching an acceptable performance level. The program based on these algorithms enables the modeling of error detection and correction processes, optimizes the redundancy of the code, monitors the decoding procedures, and determines the speed of execution. The derived code improves error detection and correction over the classical code, with less complexity. Several extensible applications of the algorithms are also given.
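
For orientation only, the sketch below illustrates the general two-dimensional (product-code) idea with simple row/column parity; the paper's code is considerably stronger (it corrects three errors and detects four), but the encode/locate/correct flow is analogous.

```python
# Illustrative 2-D parity sketch: append a parity column and row, then locate a
# single flipped bit at the intersection of the failing row and column parities.
import numpy as np

def encode(block):
    """Append a parity column and a parity row to a 2-D bit block."""
    block = np.asarray(block, dtype=int)
    row_parity = block.sum(axis=1) % 2
    data = np.column_stack([block, row_parity])
    col_parity = data.sum(axis=0) % 2
    return np.vstack([data, col_parity])

def decode(codeword):
    """Correct a single flipped bit where row and column parity checks fail."""
    codeword = codeword.copy()
    bad_rows = np.flatnonzero(codeword.sum(axis=1) % 2)
    bad_cols = np.flatnonzero(codeword.sum(axis=0) % 2)
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        codeword[bad_rows[0], bad_cols[0]] ^= 1   # flip the erroneous bit back
    return codeword[:-1, :-1]                      # strip parity row/column

data = np.array([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]])
cw = encode(data)
cw[1, 2] ^= 1                                      # inject a single-bit error
assert (decode(cw) == data).all()
```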


Author(s):  
Mayayuki Shinohara ◽  
Akira Hattori ◽  
Shigenori Ioroi ◽  
Hiroshi Tanaka ◽  
Haruo Hayami ◽  
...  

This paper presents a hazard/crime incident information sharing system using cell phones. Cell phone penetration is nearly 100% among adults in Japan, and these devices function as a telecommunication tool as well as a Global Positioning System (GPS) receiver and camera. Open source software (Apache, Postfix, and MySQL) is installed on a system server and, together with the mapping service provided by Google Maps, is used to satisfy the system requirements of the local community. Conventional systems deliver information to all people registered in the same block, even if an incident occurred far from their house. The key feature of the proposed system is that the distribution range of hazard notification e-mail messages is determined by the geometrical distance from the incident location to the residence of each registered member. The proposed system supports not only conventional cell phones but also smartphones, which are rapidly becoming popular in Japan. The new system's functionality was confirmed by a trial with members of the local community, and system operation began after the successful trial and a training meeting for local residents. System design, verification results, and operating status are described in this paper.
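
A minimal sketch of the distance-based distribution rule, under stated assumptions (the member/incident record layout and the 500 m radius are illustrative, not taken from the paper):

```python
# Hypothetical sketch: an incident notification is mailed only to members whose
# registered home location lies within a given radius of the incident.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def recipients(members, incident, radius_m=500.0):
    """Return the members registered within radius_m of the incident location."""
    return [m for m in members
            if haversine_m(m["lat"], m["lon"], incident["lat"], incident["lon"]) <= radius_m]

members = [
    {"email": "a@example.jp", "lat": 35.4437, "lon": 139.6380},
    {"email": "b@example.jp", "lat": 35.5308, "lon": 139.7029},
]
incident = {"lat": 35.4445, "lon": 139.6400}
print([m["email"] for m in recipients(members, incident)])
```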


Author(s):  
Mohammad Ali H. Eljinini

In this paper, the need for the right information for patients with chronic diseases is elaborated, followed by scenarios of how the semantic web can be utilised by stakeholders to retrieve useful and precise information. In previous work, the author demonstrated that automating knowledge acquisition from the current web is becoming an important step towards this goal. The aim was twofold: first, to learn what types of information exist in chronic disease-related websites, and second, to determine how to extract and structure such information into a machine-understandable form. It has been shown that these websites exhibit many common concepts, which resulted in the construction of an ontology to guide the extraction of information from new, unseen websites. The study also resulted in the development of a platform for information extraction that utilises the ontology. Continued work has opened many issues, which are discussed in this paper. While further work is still needed, the experiments to date have shown encouraging results.
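
Purely as a simplified illustration of ontology-guided labelling (the concept names, keywords, and matching rule below are invented for the example and are not the author's platform):

```python
# Hypothetical sketch: sections of a chronic-disease web page are matched
# against keyword patterns attached to ontology concepts.
import re

ONTOLOGY = {
    "Symptoms":  [r"\bsymptom", r"\bsigns?\b"],
    "Treatment": [r"\btreatment", r"\btherap", r"\bmedication"],
    "Diet":      [r"\bdiet\b", r"\bnutrition"],
}

def label_sections(sections):
    """Attach ontology concepts to each (heading, text) section of a page."""
    labelled = []
    for heading, text in sections:
        content = f"{heading} {text}".lower()
        concepts = [c for c, patterns in ONTOLOGY.items()
                    if any(re.search(p, content) for p in patterns)]
        labelled.append({"heading": heading, "concepts": concepts})
    return labelled

page = [("Managing your diet", "A balanced nutrition plan helps control blood sugar."),
        ("Common symptoms", "Thirst and fatigue are early signs.")]
print(label_sections(page))
```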


Author(s):  
Khadhir Bekki ◽  
Hafida Belachir

This article proposes a flexible approach to business process modeling and management. Today, business processes need to be more flexible and adaptable. The regulations and policies in organizations, as origins of change, are often expressed in terms of business rules. The ECA (Event-Condition-Action) rule is a popular way to incorporate flexibility into a process design. To raise the flexibility of business processes, the authors consider governing any business activity through ECA rules based on business rules. For adaptability, the separation of concerns supports adaptation in several ways. To cope with both flexibility and adaptability, the authors propose a new multi-concern rule-based model. For each concern, each business rule is formalized using their CECAPENETE formalism (Concern-Event-Condition-Action-Postcondition-check Execution-Number of checks-Else-Trigger-else Event). The rule-based process is then translated into a graph of rules that is analyzed in terms of relations between concerns, reliability, and flexibility.
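
A speculative sketch of how a CECAPENETE-style rule might be represented and fired: the field names follow the letters of the formalism, but the data structure and execution logic are assumptions for illustration, not the authors' engine.

```python
# Hypothetical CECAPENETE-style rule: event + condition guard an action, a
# postcondition is re-checked a bounded number of times, and an else-trigger
# event escalates when the check keeps failing.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CecapeneteRule:
    concern: str                           # C - concern the rule belongs to
    event: str                             # E - triggering event name
    condition: Callable[[dict], bool]      # C - guard over the process context
    action: Callable[[dict], None]         # A - activity to execute
    postcondition: Callable[[dict], bool]  # P - check after execution
    max_checks: int = 3                    # N - number of postcondition retries
    else_trigger: Optional[str] = None     # T - event raised when the rule gives up

def fire(rule, event, context):
    """Run the rule if its event and condition match, re-checking the postcondition."""
    if event != rule.event or not rule.condition(context):
        return None
    for _ in range(rule.max_checks):
        rule.action(context)
        if rule.postcondition(context):
            return "done"
    return rule.else_trigger               # escalate via the else-trigger event

rule = CecapeneteRule(
    concern="billing",
    event="order_placed",
    condition=lambda ctx: ctx["amount"] > 0,
    action=lambda ctx: ctx.update(invoiced=True),
    postcondition=lambda ctx: ctx.get("invoiced", False),
    else_trigger="manual_review",
)
print(fire(rule, "order_placed", {"amount": 120}))
```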


Author(s):  
Petr Aksenov ◽  
Kris Luyten ◽  
Karin Coninx

Localisation is a standard feature in many mobile applications today, and there are numerous techniques for determining a user’s location both indoors and outdoors. The location information provided is often organised in a format tailored to a particular localisation system’s needs and restrictions, making the use of several systems in one application cumbersome. The presented approach models the details of localisation systems and uses this model to create a unified view of localisation, in which special attention is paid to the uncertainty arising from different localisation conditions and to its presentation to the user. The work discusses technical considerations, challenges, and issues of the approach, and reports on a user study on the acceptance of a mobile application’s behaviour reflecting the approach. The results of the study show the suitability of the approach and reveal users’ preference for the automatic and informed changes they experienced while using the application.
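
A minimal sketch of the "unified view" idea, assuming a common record per fix with an explicit uncertainty value (field names and the selection rule are illustrative, not the paper's model):

```python
# Hypothetical sketch: readings from different localisation systems are
# normalised to one record carrying an uncertainty radius, so the application
# can pick a fix and still tell the user how reliable it is.
from dataclasses import dataclass

@dataclass
class Fix:
    source: str           # e.g. "GPS", "Wi-Fi", "UWB"
    x: float              # position in a shared reference frame (metres)
    y: float
    uncertainty_m: float  # radius within which the user is assumed to be

def best_fix(fixes):
    """Pick the reading with the smallest uncertainty, keeping that value for the UI."""
    return min(fixes, key=lambda f: f.uncertainty_m) if fixes else None

readings = [Fix("GPS", 12.1, 40.3, 8.0), Fix("Wi-Fi", 11.4, 41.0, 3.5)]
chosen = best_fix(readings)
print(f"{chosen.source}: ±{chosen.uncertainty_m} m")
```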


Author(s):  
Abdallah Al-Tahan Al-Nu’aimi

This article introduces an intelligent watermarking scheme to protect Web images from attackers who try to counterfeit the copyright and damage the rightful ownership. Using secret signs and logos embedded within the digital images, the technique can technically verify an ownership claim. In addition, the nature of each individual image is taken into consideration, which gives more reliable results. The colour channel used for embedding is chosen according to the value of its standard deviation, as a compromise between robustness and invisibility of the watermarks. Several types of test images, logos, attacks, and evaluation metrics were used to examine the performance of the techniques. Subjective and objective tests were used to check, visually and mathematically, the strengths and weaknesses of the scheme.
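
A sketch of the channel-selection step only, under an assumed rule (the target value and "closest to target" criterion below are invented for illustration; the paper does not specify this exact rule):

```python
# Hypothetical sketch: pick the colour channel whose standard deviation is
# closest to a target chosen as a robustness/invisibility compromise.
import numpy as np

def select_channel(image_rgb, target_std=40.0):
    """Return (channel index, per-channel stds) for an RGB image array."""
    stds = [float(image_rgb[:, :, c].std()) for c in range(3)]
    return int(np.argmin([abs(s - target_std) for s in stds])), stds

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
channel, stds = select_channel(image.astype(float))
print(f"embed watermark in channel {channel}, channel stds = {np.round(stds, 1)}")
```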


Author(s):  
Flora S. Tsai

This paper proposes probabilistic models for social media mining based on the multiple attributes of social media content, bloggers, and links. The authors present a unique social media classification framework that computes the normalized document-topic matrix. After comparing the results for social media classification on real-world data, the authors find that the proposed model outperforms the other techniques in terms of overall precision and recall. The results demonstrate that the additional information contained in social media attributes can improve classification and retrieval results.
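
A small sketch of one step named in the abstract, normalising a document-topic matrix so each document becomes a probability distribution over topics; the counts are synthetic and the rest of the framework is out of scope here.

```python
# Hypothetical sketch: row-normalise document-topic counts so each row sums to 1
# and can feed a downstream classifier or retrieval step.
import numpy as np

doc_topic_counts = np.array([[5., 1., 0.],    # blog post A
                             [0., 3., 3.],    # blog post B
                             [2., 2., 8.]])   # blog post C

row_sums = doc_topic_counts.sum(axis=1, keepdims=True)
doc_topic = doc_topic_counts / np.where(row_sums == 0, 1, row_sums)
print(np.round(doc_topic, 2))   # each row is now a topic distribution
```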


Author(s):  
Lu Ge ◽  
Gaojie J. Chen ◽  
Jonathon A. Chambers

The implementation of cooperative diversity with relays has advantages over point-to-point multiple-input multiple-output (MIMO) systems, in particular in overcoming correlated paths due to small inter-element spacing. A simple transmitter with one antenna may exploit cooperative diversity or space-time coding gain through distributed relays. In this paper, similar distributed transmission is considered with the golden code, and the authors propose a new strategy for relay selection, called the maximum-mean selection policy, for distributed transmission with full maximum likelihood (ML) decoding and sphere decoding (SD) over a wireless relay network. This strategy performs a channel strength tradeoff at every relay node to select the best two relays for transmission, improving on the established one-sided maximum-minimum selection policy. Simulation results comparing the bit error rate (BER) of different detectors for a scheme without relay selection and for the maximum-minimum and maximum-mean selection schemes confirm the performance advantage of relay selection; the proposed strategy yields the best performance of the three methods.
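
A sketch of the selection step under stated assumptions: each relay is characterised by a source-relay and a relay-destination channel gain, "maximum-mean" ranks relays by the mean of the two, and the best two relays are kept; the gains below are synthetic and the decoding stage is omitted.

```python
# Hypothetical comparison of max-mean vs. max-min relay selection over
# synthetic Rayleigh-fading channel gains.
import numpy as np

rng = np.random.default_rng(7)
n_relays = 6
g_sr = rng.rayleigh(1.0, n_relays) ** 2     # |h_source->relay|^2
g_rd = rng.rayleigh(1.0, n_relays) ** 2     # |h_relay->destination|^2

def select(policy):
    """Return indices of the two best relays under the given policy."""
    score = {"max-mean": (g_sr + g_rd) / 2, "max-min": np.minimum(g_sr, g_rd)}[policy]
    return np.argsort(score)[-2:][::-1]

print("max-mean picks relays", select("max-mean"))
print("max-min  picks relays", select("max-min"))
```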


Author(s):  
Dilip Kumar Sharma ◽  
A. K. Sharma

A traditional crawler picks up a URL, retrieves the corresponding page, and extracts various links, adding them to the queue. A deep Web crawler, after adding links to the queue, checks for forms; if forms are present, it processes them and retrieves the required information. Various techniques have been proposed for crawling deep Web information, but much remains undiscovered. In this paper, the authors analyze and compare important deep Web information crawling techniques to find their relative limitations and advantages. To minimize the limitations of existing deep Web crawlers, a novel architecture is proposed based on QIIIEP specifications (Sharma & Sharma, 2009). The proposed architecture is cost effective and has features of privatized search and general search for deep Web data hidden behind HTML forms.
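
A minimal sketch of the step the abstract describes, detecting forms alongside ordinary links on a fetched page; the queueing, form filling, and QIIIEP handling are out of scope, and the sample HTML is made up.

```python
# Hypothetical sketch: scan a fetched page for <a href> links (surface-web
# frontier) and <form> elements (candidate deep-web entry points).
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self.forms = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])        # ordinary link to enqueue
        elif tag == "form":
            self.forms.append(attrs.get("action"))  # form to process for hidden data

html = '<a href="/about">About</a><form action="/search"><input name="q"></form>'
scanner = PageScanner()
scanner.feed(html)
print("links:", scanner.links, "| forms:", scanner.forms)
```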

