Journal of Information Technology and Digital World - September 2019
Latest Publications


TOTAL DOCUMENTS: 64 (five years: 64)

H-INDEX: 5 (five years: 5)

Published By: Inventive Research Organization

ISSN: 2582-418X

Author(s):  
Arun Agarwal, Chandan Mohanta, Gourav Misra

5G mobile communication has now become commercially available. Furthermore, research across the globe has begun to improve the system beyond 5G, and it is anticipated that 6G will deliver higher-quality services and better energy efficiency than 5G. The mobile network architecture needs to be redesigned to meet the requirements of the future. In the wake of the commercial rollout of 5G, both users and developers have realized the system's limitations compared with its original premise of supporting vast numbers of connected-device applications. The article discusses the related technologies that can contribute to a robust and seamless network service. An upheaval in the use of mobile applications, especially those powered and managed by AI, has opened the door to discussion of how mobile communication will evolve in the future. 6G is expected to go beyond being merely a mobile internet service provider and to support the omnipresent AI services that will form the bedrock of end-to-end connected network-based devices. Moreover, the technologies that support 6G services, and the comprehensive research that enables this level of technical prowess, are also identified here. This paper presents a collective wide-angle vision that will facilitate a better understanding of the features of the 6G system.


Author(s):  
R. Kanthavel

Multimedia data in various forms is now readily available because of the widespread usage of Internet technology. Unauthorized individuals abuse multimedia material to which they should not have access, disseminating it over several web pages to defraud the original copyright owners. Numerous patient records were compromised during the surge in COVID-19 incidents. Adding a watermark to medical or defense documents is recommended since it protects the integrity of the information. The proposed work introduces an innovative watermarking technique. The resilience of the watermarked image is crucial in the context of steganography, so this research study focuses on the resilience of watermarked-image methods. Moreover, a two-stage authentication for watermarking is built with key generation in the robustness-improvement stage. The Fast Fourier Transform (FFT) is used throughout the execution of the suggested framework to keep the computation straightforward. Combined with a Singular Value Decomposition (SVD) stage, the overall architecture becomes more resilient and efficient. Numerous quality metrics are utilized to evaluate how well the proposed technique performs. In addition, several signal-processing attacks are used to assess the effectiveness of the watermarking strategy.
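To make the FFT-plus-SVD pipeline concrete, the sketch below embeds a watermark by perturbing the singular values of a host image's FFT magnitude spectrum. It is a minimal illustration, not the paper's exact scheme: the embedding strength `alpha`, the use of the original singular values as the extraction key, and the assumption of a square grayscale host (with a watermark no larger than the host) are choices made here for brevity.

```python
import numpy as np

def embed_watermark(host, watermark, alpha=0.05):
    """Embed a watermark in the FFT magnitude of a square grayscale host via SVD."""
    spectrum = np.fft.fft2(host)
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    U, S, Vt = np.linalg.svd(mag)                  # decompose the magnitude spectrum
    Sw = np.linalg.svd(watermark, compute_uv=False)
    S_marked = S.copy()
    S_marked[:Sw.size] += alpha * Sw               # additive embedding in singular values
    mag_marked = U @ np.diag(S_marked) @ Vt
    marked = np.fft.ifft2(mag_marked * np.exp(1j * phase)).real
    return marked, S                               # S serves as the extraction key here

def extract_watermark_values(marked, key_S, alpha=0.05):
    """Recover the embedded singular-value perturbation using the stored key."""
    mag = np.abs(np.fft.fft2(marked))
    S = np.linalg.svd(mag, compute_uv=False)
    return (S - key_S) / alpha                     # leading entries approximate Sw
```

In a full two-stage authentication scheme, the key would itself be generated and verified before extraction; here the stored singular values merely stand in for that key material.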


Author(s):  
Judy Simon

Computer vision, also known as computational visual perception, is a branch of artificial intelligence that allows computers to interpret digital pictures and videos in a manner comparable to biological vision. It entails the development of techniques for simulating biological vision. The aim of computer vision is to extract more meaningful information from visual input than biological vision does. Computer vision is exploding due to the avalanche of data being produced today. Powerful generative models, such as Generative Adversarial Networks (GANs), are responsible for significant advances in the field of image generation. This research concentrates on the textual content descriptors in the images used by GANs to generate synthetic data from the MNIST dataset, to either supplement or replace the original data while training classifiers. This can provide better performance than traditional dataset-enlargement procedures because the synthetic data is handled well. It shows that training classifiers on synthetic data is as effective as training them on pure data alone, and it also reveals that, for small training sets, supplementing the dataset by first training GANs on the data may lead to a significant increase in classifier performance.
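A minimal sketch of the augmentation step follows. It assumes a class-conditional generator `G(z, labels)` that has already been trained on the small real subset; the latent dimension, class count, and function names are illustrative, since the abstract does not specify the GAN variant.

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset

def augment_with_gan(real_dataset, G, n_synthetic,
                     latent_dim=100, n_classes=10, device="cpu"):
    """Enlarge a small MNIST training set with samples drawn from a trained
    class-conditional generator before fitting the classifier."""
    G.eval()
    labels = torch.randint(0, n_classes, (n_synthetic,), device=device)
    with torch.no_grad():
        z = torch.randn(n_synthetic, latent_dim, device=device)
        fake = G(z, labels).cpu()          # assumed output shape: (n, 1, 28, 28)
    synthetic = TensorDataset(fake, labels.cpu())
    # The classifier then trains on real and synthetic digits together.
    return ConcatDataset([real_dataset, synthetic])
```

With an unconditional GAN, the sampled digits would instead need pseudo-labels from a pre-trained classifier before they could supplement the training set.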


Author(s):  
Joy Iong-Zong Chen, Lu-Tsou Yeh

Waiting is undesirable for most people, and in digital money transactions especially, a longer processing time leaves people doubtful about whether their transaction will succeed. The Unified Payment Interface (UPI) system was developed in India to minimize the typographic work involved in digital money transactions. The UPI system assigns each individual a unique UPI identification number tied to their name, bank name, branch name, and account number. Sharing account information has therefore become easier, reducing the chance of typographic errors in digital transaction applications. Sharing of UPI details is also made easy and secure with Quick Response (QR) code scanning. However, a digital transaction over UPI requires many servers to be operated for a single transaction, as do the National Electronic Fund Transfer (NEFT) and Immediate Payment Service (IMPS) systems in India. This increases the waiting time of digital transactions due to poor server communication and a high volume of payment requests on a particular server. The motive of the proposed work is to minimize server communication by employing a distributed blockchain system. The performance is verified with a simulation experiment on the BlockSim simulator, in terms of transaction success rate and processing time, against traditional systems.
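The sketch below shows, in miniature, the distributed-ledger idea the proposal relies on: each node keeps a full hash-linked copy of the transaction history, so validating a payment does not require round trips to a central server. This is a toy illustration of the concept, not the BlockSim model or the authors' implementation; the transaction fields are assumptions.

```python
import hashlib, json, time

class Block:
    """A block holding a batch of UPI-style transactions."""
    def __init__(self, index, transactions, prev_hash):
        self.index = index
        self.timestamp = time.time()
        self.transactions = transactions   # e.g. [{"from": "a@upi", "to": "b@upi", "amount": 10}]
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps({"index": self.index, "ts": self.timestamp,
                              "txs": self.transactions, "prev": self.prev_hash},
                             sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    """Every node holds the whole chain, so no single server is a bottleneck."""
    def __init__(self):
        self.chain = [Block(0, [], "0" * 64)]        # genesis block

    def add_block(self, transactions):
        prev = self.chain[-1]
        self.chain.append(Block(prev.index + 1, transactions, prev.hash))

    def is_valid(self):
        # Any tampering breaks either a stored hash or a prev-hash link.
        return all(b.prev_hash == p.hash and b.hash == b.compute_hash()
                   for p, b in zip(self.chain, self.chain[1:]))
```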


Author(s):  
R. Asokan, T. Vijayakumar

Noise can scramble a message that is sent. This is true both for voicemails and for digital communications transmitted to and from computer systems, and errors tend to happen during transmission. Computer memory is the most common place where Hamming-code error correction is used. With extra parity/redundancy bits added to a Hamming code, single-bit errors may be detected and corrected. Short-distance data transmissions often make use of Hamming coding. When the code is scaled to longer data lengths, the redundancy bits are interspersed with the data and subsequently removed. The new Hamming-code approach can be quickly and easily adapted to any situation. As a result, it is ideal for sending large data bitstreams, since the ratio of overhead bits to data bits is much lower. This article investigates extended Hamming codes for product codes. The proposal particularly emphasises how well they function at low error rates, which is critical for multimedia wireless applications. It provides a foundation and a comprehensive set of methods for quantitatively evaluating this performance without the need for time-consuming simulations. It provides fresh theoretical findings on the well-known approximation in which the bit error rate roughly equals the frame error rate times the ratio of minimum distance to codeword length (BER ≈ FER · d_min/n). Moreover, the analytical method is applied to actual design considerations such as shortened and punctured codes, along with the calculation of payload and redundancy bits. Using the extended identity equation on the dual codes, decoding can be done at the first instance. A redundancy of 43.48% is obtained during testing, a substantially reduced proportion achieved in this research work.
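As a concrete reference point, the sketch below implements the extended Hamming (8,4) code, a Hamming (7,4) code plus one overall parity bit, which is the usual single-error-correcting, double-error-detecting (SEC-DED) building block for product codes. The generator and parity-check matrices are the standard systematic forms; the paper's specific construction and interleaving are not reproduced here.

```python
import numpy as np

# Systematic Hamming (7,4): codeword layout is [d1 d2 d3 d4 p1 p2 p3].
G = np.array([[1,0,0,0, 0,1,1],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 1,1,0],
              [0,0,0,1, 1,1,1]])
H = np.array([[0,1,1,1, 1,0,0],   # parity checks matching G (H @ cw = 0 mod 2)
              [1,0,1,1, 0,1,0],
              [1,1,0,1, 0,0,1]])

def encode(data4):
    """Length-4 0/1 NumPy array -> 8-bit extended Hamming codeword."""
    cw7 = np.mod(data4 @ G, 2)
    return np.append(cw7, cw7.sum() % 2)     # overall parity bit extends the code

def decode(cw8):
    """Correct any single-bit error; detect (but not correct) double errors."""
    cw7 = cw8[:7].copy()
    syndrome = np.mod(H @ cw7, 2)
    parity_ok = (cw8.sum() % 2) == 0
    if syndrome.any():
        if parity_ok:                        # two errors: detectable only
            raise ValueError("double-bit error detected")
        # Single error: the syndrome equals the column of H at the error position.
        pos = [tuple(H[:, i]) for i in range(7)].index(tuple(syndrome))
        cw7[pos] ^= 1
    return cw7[:4]                           # systematic: data bits come first
```

For comparison, this toy code spends 4 of its 8 bits (50%) on redundancy; the 43.48% figure reported above refers to the authors' scaled construction.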


Author(s):  
R. Kanthavel, R. Dhaya

Owing to the increasing occurrence of knee osteoarthritis (OA), a devastating degeneration of the knee joint, there is a need for better medical and preclinical instruments to diagnose knee OA in its initial phases. Osteoarthritis commonly affects patients who are obese and those above the age of 60. The goal is to provide practical methods for assessing the severity of knee OA quickly and with human-level consistency. We also present the factors that affect a person's chances of developing knee osteoarthritis, the treatment of knee osteoarthritis, and methods for its prevention.


Author(s):  
Akey Sungheetha, Rajesh Sharma R

Over the last decade, remote sensing technology has advanced dramatically, resulting in significant improvements in image quality, data volume, and application usage. These images have essential applications since they can help with quick and easy interpretation. Many standard detection algorithms fail to accurately categorize a scene from a remote sensing image recorded from the earth. This work uses bilinear convolutional neural networks to produce a lighter-weight set of models that yields better visual recognition in remote sensing images through fine-grained techniques. The proposed hybrid method extracts scene feature information twice from remote sensing images for improved recognition. In layman's terms, the raw features have only a single defined frame, so they allow only basic recognition from remote sensing images. This research work proposes a double-feature-extraction hybrid deep learning approach to classify remotely sensed image scenes based on feature-abstraction techniques. The proposed algorithm is also applied to the feature values in order to convert them, after many product operations, into feature vectors with pure black-and-white values. The next stage, pooling and normalization, follows the CNN feature extraction process. This research work develops a novel hybrid framework that achieves a better level of accuracy and a higher recognition rate than prior models.
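The sketch below shows the fine-grained technique the abstract describes: two CNN streams extract features from the same remote sensing image, and their outer product (bilinear pooling) is normalized before classification. The VGG-16 backbone, layer sizes, and normalization constants are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class BilinearCNN(nn.Module):
    """Two-stream ("double") feature extraction with bilinear pooling."""
    def __init__(self, num_classes):
        super().__init__()
        self.stream_a = models.vgg16(weights=None).features   # first extraction pass
        self.stream_b = models.vgg16(weights=None).features   # second extraction pass
        self.fc = nn.Linear(512 * 512, num_classes)

    def forward(self, x):
        fa, fb = self.stream_a(x), self.stream_b(x)           # (B, 512, H, W) each
        B, C, H, W = fa.shape
        fa, fb = fa.view(B, C, H * W), fb.view(B, C, H * W)
        # Outer product over spatial locations: the "many product operations".
        bilinear = torch.bmm(fa, fb.transpose(1, 2)) / (H * W)  # (B, 512, 512)
        bilinear = bilinear.view(B, -1)
        # Signed square root + L2 normalization, standard for bilinear pooling.
        bilinear = torch.sign(bilinear) * torch.sqrt(bilinear.abs() + 1e-10)
        return self.fc(F.normalize(bilinear))
```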


Author(s):  
Dhaya R.

In recent years, digital watermarking has improved the accuracy and resistance of watermarked images against many attacks, such as various types of noise and random distortions. Existing watermarking techniques offer an acceptable level of resistance against the most recent attacks, and the deep learning approach is one of the most remarkable methods for guaranteeing maximal resistance in digital image watermarking. Achieving a small amount of calculation time together with high robustness has recently become a difficult challenge in digital watermarking. In this research study, the lightweight convolutional neural network (LW-CNN) technique is introduced and implemented for the digital watermarking scheme, giving it more resilience than other standard approaches. Because of the LW-CNN framework's feature selection, the calculation time is reduced. Furthermore, we demonstrate robustness against two distinct types of attack: collusion and geometric attacks. This research work reduces the calculation time and makes the system more resistant to current attacks.
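The abstract does not give the LW-CNN layout, so the sketch below illustrates one common way to build a lightweight CNN: depthwise separable convolutions, which use far fewer multiplications than standard convolutions and hence cut calculation time. The layer sizes and the watermark-present/absent detection head are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    """Depthwise conv + pointwise conv: roughly 8-9x fewer multiplications
    than a standard 3x3 convolution at these channel widths."""
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(cin, cin, 3, stride, 1, groups=cin, bias=False),
            nn.BatchNorm2d(cin), nn.ReLU(inplace=True),
            nn.Conv2d(cin, cout, 1, bias=False),
            nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

class LWWatermarkNet(nn.Module):
    """Illustrative lightweight detector: decides whether a possibly
    attacked grayscale image still carries its embedded watermark."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            DepthwiseSeparable(1, 32),
            DepthwiseSeparable(32, 64, stride=2),
            DepthwiseSeparable(64, 128, stride=2),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(128, 2)       # watermark present / absent

    def forward(self, x):
        return self.head(self.features(x).flatten(1))
```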

