International Journal of Scientific Research in Computer Science Engineering and Information Technology
Latest Publications


TOTAL DOCUMENTS

1129
(FIVE YEARS 1127)

H-INDEX

2
(FIVE YEARS 2)

Published By Technoscience Academy

ISSN: 2456-3307
Updated Tuesday, 18 January 2022

Author(s):  
Suhas S ◽  
Dr. C. R. Venugopal

An enhanced system for classifying MR images using an association of kernels with a support vector machine (SVM) is developed and presented in this paper, along with the design and development of a content-based image retrieval (CBIR) system. Content-based image retrieval is the process of finding relevant images in a large image database using visual queries. Medical imaging has led to the growth of large image collections. An Oriented Rician Noise Reduction Anisotropic Diffusion filter is used for image denoising, and a modified hybrid Otsu algorithm is used for image segmentation. Texture features are extracted using the GLCM method, and a genetic algorithm with joint entropy is adopted for feature selection. Classification is performed by a support vector machine with various kernels, and the performance is validated. A classification accuracy of 98.83% is obtained using SVM with a Gaussian RBF (GRBF) kernel. The extracted features are used to classify MR images into five different categories, and the performance of the MC-SVM classifier is compared across kernel functions. From performance measures such as classification accuracy, it is inferred that brain and spinal cord MRI classification is best done using MC-SVM with the Gaussian RBF kernel rather than the linear and polynomial kernels. The proposed system provides the best classification performance, with high accuracy and a low error rate.
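A minimal sketch of the kind of GLCM texture extraction described above. The toy image, quantization to four gray levels, the (1, 0) offset, and the choice of contrast and energy as the derived features are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal GLCM (gray-level co-occurrence matrix) sketch in pure Python.
# Counts pairs of gray levels at a fixed offset, normalizes the counts,
# then derives two common Haralick-style texture features.
def glcm_features(img, levels=4, dx=1, dy=0):
    h, w = len(img), len(img[0])
    glcm = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                glcm[img[y][x]][img[ny][nx]] += 1
    total = sum(sum(row) for row in glcm) or 1
    p = [[c / total for c in row] for row in glcm]
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(p[i][j] ** 2
                 for i in range(levels) for j in range(levels))
    return contrast, energy

# Tiny 4-level toy "image" standing in for a quantized MR slice.
toy = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
contrast, energy = glcm_features(toy)
```

Feature vectors of this kind are what the SVM kernels above are compared on.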


Author(s):  
Kiran Khandarkar ◽  
Dr. Sharvari Tamne

This research provides a method for improving change detection in SAR images using image fusion and a supervised classification system. To remove noise from the input images, the DnCNN denoising approach is used. The first image is then processed with the mean-ratio operator, and the second with the log-ratio operator. The two resulting images are fused using SWT-based image fusion, and the output is sent to a supervised classifier for change detection.
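A sketch of the two difference operators named above, applied to toy intensity grids. The log-ratio operator highlights relative change between the two acquisitions; the mean-ratio operator normally compares local neighbourhood means, but for brevity this sketch applies the same ratio rule per pixel. The toy images are invented, not from the paper's data.

```python
import math

# Log-ratio and (simplified) mean-ratio change maps for two SAR-like
# intensity grids. eps guards against division by zero on dark pixels.
def log_ratio(img1, img2, eps=1e-6):
    return [[abs(math.log((b + eps) / (a + eps)))
             for a, b in zip(r1, r2)] for r1, r2 in zip(img1, img2)]

def mean_ratio(img1, img2, eps=1e-6):
    return [[1 - min((a + eps) / (b + eps), (b + eps) / (a + eps))
             for a, b in zip(r1, r2)] for r1, r2 in zip(img1, img2)]

before = [[10.0, 10.0], [10.0, 10.0]]
after = [[10.0, 20.0], [10.0, 10.0]]  # one pixel changed
lr = log_ratio(before, after)
mr = mean_ratio(before, after)
```

Unchanged pixels map to values near zero in both maps, which is what makes the fused result separable by the supervised classifier.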


Author(s):  
Divya Tiwari ◽  
Surbhi Thorat

Fake news dissemination is a critical issue in today's fast-changing network environment. The problem of online fake news has attained increasing prominence in the diffusion of news stories online. This paper deals with categorical cyber-terrorism threats on social media and a preventive approach to minimize them. Misleading or unreliable information in the form of videos, posts, articles, and URLs is extensively disseminated through popular social media platforms such as Facebook and Twitter. As a result, editors and journalists need new tools that can help them speed up the verification process for content that originates from social media. Existing classification models for fake news detection have not completely stopped the spread because of their inability to classify news accurately, leading to a high false alarm rate. This study proposes a model that can accurately identify and classify deceptive news articles infused into social media by malicious users. The news content, social-context features, and the classification of each reported news item were extracted from the PHEME dataset using entropy-based feature selection, and the selected features were normalized using Min-Max normalization. The model was simulated and its performance evaluated against an existing model using detection accuracy, sensitivity, and precision as metrics. The evaluation showed 17.25% higher detection accuracy and 15.78% higher sensitivity, but 0.2% lower precision, than the existing model. The proposed model thus detects more fake news instances accurately from both news-content and social-content perspectives, indicating a better detection rate and a reduced false alarm rate.
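A minimal sketch of the Min-Max normalization step mentioned above: each feature column is rescaled to [0, 1] before classification. The example feature matrix is made up for illustration and is not drawn from the PHEME dataset.

```python
# Min-Max normalization: rescale every feature column to [0, 1].
# Columns with zero range (constant features) map to 0.0.
def min_max_normalize(rows):
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in rows]

# Two hypothetical features per news item, on very different scales.
features = [[2.0, 100.0], [4.0, 300.0], [6.0, 200.0]]
normalized = min_max_normalize(features)
```

Normalizing keeps large-scale features (e.g. share counts) from dominating small-scale ones (e.g. entropy scores) in the classifier.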


Author(s):  
Pinjari Vali Basha

<p>With the rapid transformation of technology, a huge amount of data (structured and unstructured) is generated every day. With the aid of 5G technology and IoT, the data generated and processed each day is very large: approximately 2.5 quintillion bytes.<br> This data (Big Data) is stored and processed with the help of the Hadoop framework, which has two components for storing and retrieving data in the network.</p> <ul> <li>Hadoop Distributed File System (HDFS)</li> <li>MapReduce algorithm</li> </ul> <p>The native Hadoop framework has some limitations in its MapReduce algorithm: if the same job is repeated, all of its steps must be carried out again before the results are returned, which wastes time and resources. Improving the capabilities of the NameNode, i.e., maintaining a Common Job Block Table (CJBT) at the NameNode, improves performance at the cost of maintaining the table.<br> The Common Job Block Table contains the metadata of files that are processed repeatedly. This avoids recomputation, reduces the number of computations, saves resources, and yields faster processing. Since the size of the Common Job Block Table keeps increasing, its size should be limited by an algorithm that keeps track of jobs; the optimal Common Job Block Table is derived by employing an optimal algorithm at the NameNode.</p>
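The abstract does not give the CJBT's exact structure, so the following is a hypothetical sketch: a bounded table at the NameNode keyed by (job, input file) that serves results for repeated jobs instead of re-running MapReduce. The least-recently-used eviction policy, the key format, and all names here are illustrative assumptions, not the paper's design.

```python
from collections import OrderedDict

# Hypothetical Common Job Block Table (CJBT) sketch: a size-limited cache
# of results for repeated MapReduce jobs. Eviction (LRU) stands in for the
# paper's "optimal algorithm" that bounds the table's growth.
class CommonJobBlockTable:
    def __init__(self, capacity=2):
        self.table = OrderedDict()
        self.capacity = capacity

    def lookup(self, job_name, input_file):
        key = (job_name, input_file)
        if key in self.table:
            self.table.move_to_end(key)   # mark as recently used
            return self.table[key]
        return None                        # not cached: run the job

    def store(self, job_name, input_file, result):
        self.table[(job_name, input_file)] = result
        if len(self.table) > self.capacity:
            self.table.popitem(last=False)  # evict least recently used

cjbt = CommonJobBlockTable()
cjbt.store("wordcount", "/logs/day1", {"hadoop": 42})
hit = cjbt.lookup("wordcount", "/logs/day1")   # repeated job: served from table
miss = cjbt.lookup("wordcount", "/logs/day2")  # new input: must run MapReduce
```

A hit avoids the full MapReduce pass, which is the time and resource saving the abstract describes.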


Author(s):  
Arighna Chakraborty ◽  
Asoke Nath

Conversational AI is an interesting problem in the field of Natural Language Processing and combines natural language processing with machine learning. There have been many advancements in this field, with each new model architecture capable of processing more data, better optimisation and execution, handling more parameters, and achieving higher accuracy and efficiency. This paper discusses trends and advancements in natural language processing and conversational AI: RNNs and RNN-based architectures such as LSTMs, sequence-to-sequence models, and finally Transformer networks, the latest in NLP and conversational AI. The authors compare the various models in terms of efficiency and accuracy, and discuss the scope and challenges of Transformer models.
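A toy sketch of scaled dot-product attention, the core operation of the Transformer networks surveyed above. The matrices here are tiny hand-picked values; real models use learned projections over many heads.

```python
import math

# Scaled dot-product attention on hand-sized matrices (lists of lists).
def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    d = len(K[0])
    out = []
    for q in Q:
        # similarity of the query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        # weighted mixture of the value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
ctx = attention(Q, K, V)   # the query attends mostly to the first key/value
```

Unlike an RNN, every position can attend to every other in one step, which is the parallelism advantage the survey discusses.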


Author(s):  
Waheed Muhammad SANYA ◽  
Gaurav BAJPAI ◽  
Haji Ali HAJI

Vision helps humans understand environmental changes over time; these changes are observed by capturing images, and the digital image plays a dynamic role in everyday life. Image denoising, the process of recovering the details of an image while removing random noise, is a well-explored research topic in image processing. In the past, progress in image denoising has come from improved modeling of digital images; hence, the major challenge for a denoising algorithm is to improve the visual appearance while preserving the other details of the real image. Significant research today focuses on wavelet-based denoising methods. This paper presents a new approach to the Sobel image-processing algorithm on the Linux platform and develops an effective algorithm using different optimization techniques on the SABRE i.MX_6, concentrating on optimization of the image-processing algorithm. Using the OpenCV environment, the paper simulates salt-and-pepper noise and removes the noisy pixels with a median filter. The Sobel convolution method is used in the design of a Sobel filter, which processes the image after the median filter to achieve an effective edge-detection result. Finally, the algorithm is optimized in the SABRE i.MX_6 Linux environment. Using algorithmic optimization (lower-complexity algorithms in the mathematical sense and appropriate data structures), optimization for RISC processors (loop unrolling), and optimization for efficient use of hardware resources (data access, cache management, and multi-threading), the paper analyzes the different response parameters of the system with varied inputs, different compiler options (O1, O2, or O3), and different degrees of loop unrolling. The proposed denoising algorithm shows a meaningful improvement in the visual quality of the images, together with the algorithmic optimization assessment.
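A pure-Python sketch of the 3x3 median-filter step described above (the paper works in OpenCV, where `cv2.medianBlur(img, 3)` performs the same operation). The toy patch is invented; border pixels are left unchanged for brevity.

```python
# 3x3 median filter: replace each interior pixel with the median of its
# neighbourhood, which removes isolated salt-and-pepper outliers while
# preserving flat regions better than a mean filter would.
def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]   # median of 9 values
    return out

# A flat gray patch with one "salt" pixel (255): the median removes it.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
clean = median_filter_3x3(noisy)
```

After this step, the Sobel filter can detect edges without the impulse noise producing spurious gradients.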


Author(s):  
Lokesh S ◽  
Jayasri B. S

A cross-layered framework is an important concept today, given the abundant usage of both single-path and multi-path wireless network architectures. One of the important design issues in the development of such a robust framework is the design of an Optimization Agent (OA). In wireless and wired ad-hoc networks, cross-layer design was introduced a few years ago to explore joint optimization across different layers, and the Open Systems Interconnection (OSI) model has been employed to describe cross-layered solutions. However, no common reference mechanism exists to aid optimization, which can effectively halt the adaptation and deployment of cross-layered solutions. In this study, we suggest some hypotheses on how to model and create cross-layer solutions using the OSI layered method, and we apply this method to analyse and simulate a particular type of cross-layered solution: energy-aware routing protocols. We use a layered approach to examine two proposals available in the literature. The applied strategy leads to an energy-aware solution that outperforms prior versions and provides clear insights into the role that each layer plays in the overall optimization process. Network throughput, utilization, and reliability have all increased rapidly in the last few years. With the emergence of broadband wireless and wired cellular networks, mobile ad-hoc networks (MANETs), and improved computational capacity, a new generation of applications, especially real-time multimedia applications, has emerged. Delivering real-time multimedia traffic across a sophisticated network like the Internet can be a particularly difficult undertaking, as these applications have stringent bandwidth and other quality-of-service (QoS) requirements.
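An illustrative sketch of the kind of choice an energy-aware routing protocol makes: shortest-path search over a graph whose edge weights are per-link transmission energy costs, so the selected route minimizes total energy rather than hop count. The topology and costs are made-up values, not taken from the two proposals the study examines.

```python
import heapq

# Dijkstra over energy-weighted links: returns the minimum-energy route.
def min_energy_path(graph, src, dst):
    pq = [(0.0, src, [src])]   # (accumulated energy, node, route so far)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, energy in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + energy, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical topology: edge weights are transmission energy costs.
graph = {"A": [("B", 1.0), ("C", 4.0)],
         "B": [("C", 1.5), ("D", 5.0)],
         "C": [("D", 1.0)]}
cost, route = min_energy_path(graph, "A", "D")
```

Note the chosen route has more hops than A-C-D but less total energy, which is exactly the trade-off an energy-aware metric makes against hop count.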


Author(s):  
Sagarmoy Ganguly ◽  
Asoke Nath

Quantum cryptography is a comparatively new and special type of cryptography that uses quantum mechanics to provide strong protection of data and unconditionally secure communication. This is achieved with Quantum Key Distribution (QKD) protocols, which represent an essential practical application of quantum computation. In this paper the authors explore the concept of QKD by reviewing how QKD works, examining a few QKD protocols, presenting a practical example of quantum cryptography using QKD, and discussing certain limitations from the perspective of computer science in particular and quantum physics in general.
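A toy simulation of the sifting step common to BB84-style QKD protocols of the kind reviewed in the paper: sender and receiver each choose a random measurement basis per bit, and only positions where the bases match contribute to the shared key. This classical sketch assumes no eavesdropper, so matched-basis measurements agree exactly.

```python
import random

# BB84-style sifting: keep only the bits where Alice's preparation basis
# and Bob's measurement basis coincide ("X" or "Z"). With no eavesdropper,
# those bits are identical on both sides and form the raw shared key.
def bb84_sift(n, seed=0):
    rng = random.Random(seed)   # seeded so the toy run is reproducible
    bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("XZ") for _ in range(n)]
    bob_bases = [rng.choice("XZ") for _ in range(n)]
    return [b for b, a, o in zip(bits, alice_bases, bob_bases) if a == o]

key = bb84_sift(16)   # on average about half the positions survive sifting
```

In a real QKD run the parties would additionally sacrifice a sample of the sifted key to estimate the error rate and detect eavesdropping.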


Author(s):  
Ropa Roy ◽  
Asoke Nath

A quantum gate, or quantum logic gate, is an elementary quantum circuit operating on a small number of qubits. Quantum gates can exploit two primary features of quantum mechanics that are entirely out of reach for classical gates: superposition and entanglement. Unlike most classical gates, quantum gates are reversible. In classical computing, sets of logic gates are connected to construct digital circuits; similarly, quantum logic gates operate on input states, which are generally in superposition, to compute the output. In this paper the authors discuss in detail single- and multiple-qubit gates, and the scope and challenges of quantum gates.
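A small numeric sketch of a single-qubit gate acting on a state vector: the Hadamard gate H sends |0⟩ to the equal superposition (|0⟩ + |1⟩)/√2, and applying it twice returns the original state, illustrating the reversibility noted above. The state-vector representation is standard; the pure-Python matrix code is just for illustration.

```python
import math

# Apply a gate (matrix) to a state vector by matrix-vector multiplication.
def apply_gate(gate, state):
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]                 # Hadamard gate

zero = [1.0, 0.0]             # |0>
plus = apply_gate(H, zero)    # (|0> + |1>)/sqrt(2): equal superposition
back = apply_gate(H, plus)    # H is its own inverse, so this is |0> again
```

The amplitudes of `plus` square-sum to 1, and measuring it yields 0 or 1 with equal probability, which is the superposition a classical gate cannot produce.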

