PHYSICAL CONTENT OF COMPUTER STEGANOGRAPHY

2021 ◽  
Vol 23 ◽  
pp. 27-32
Author(s):  
O. Polotai ◽  
O. Belej ◽  
N. Maltseva

Introduction. The development of computer technology has given a new impetus to the use of computer steganography. However, it is important to understand the physical content of this type of steganography. Purpose. The work aims to describe the practical use and physical content of the phenomenon of computer steganography, together with the results of a study on hiding files in a stegocontainer. Results. Computer steganography methods are currently used to solve the following tasks: protection of confidential information from unauthorized access, overcoming the monitoring and management of network resources, software camouflage, and copyright protection. The last of these, manifested in the use of digital watermarks, is one of the most promising areas of computer steganography. Among the methods of hiding information in images, the most common category comprises algorithms that use the lower bits of the image data; these are considered in this paper. Such algorithms rely on the fact that in some file formats the lower bits of the values, while present in the file, do not affect a person's perception of the sound or image. The steganographic software S-Tools was chosen for the study. To analyze how stego-data are placed in container files, we created two monochrome test images, one black and one white, of 50 × 50 pixels in 24-bit BMP format. A text file was hidden in each of the images, after which the reverse action, extracting the file, was performed. As a result of hiding, two stego files were obtained. The paper compares the binary content of the original images with that of the files containing private data. For comparison, the binary content of the black-square image and the contents of the stegocontainer with a hidden text file are given. Note that the contents of the container and the stego file are listed only partially, with the addresses of the memory cells selected accordingly.
The right column shows the contents of the memory cells in hexadecimal format. The bytes that encode the colour of the square are set to "00" because the original image contains only black. We noted that the contents of the cells responsible for the image changed after the additional data were hidden (this is reflected by cells with the value "01"). The paper also describes the procedure for hiding a group of files of different types. During the study, we found that an image file (1920 × 1080 pixels) of 6,220,854 bytes can hide 777,584 bytes of information. Conclusion. When using steganography, the program applies algorithms that hide confidential data among the contents of the container: bits of the hidden file replace bits of the original file at random positions. Thus, the size of the source file and of the container file (the one carrying the attached information) is the same, regardless of how many files or how much data is hidden.
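The abstract does not reproduce S-Tools' actual algorithm; as a rough illustration of the lower-bit technique it describes, the following Python sketch (function names hypothetical) writes each bit of a secret file into the least significant bit of successive carrier bytes, leaving the container size unchanged:

```python
def hide(carrier: bytes, secret: bytes) -> bytearray:
    """Write each bit of `secret` into the least significant bit of
    successive carrier bytes (the lower-bit technique described above)."""
    out = bytearray(carrier)
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("carrier too small for the secret")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the low bit, then set it
    return out

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden data from the low bits of `stego`."""
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))
```

For an all-black image, whose pixel bytes are "00", hiding data flips some of them to "01", exactly the change the comparison of memory cells above reports, while the file size stays the same.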

2021 ◽  
Vol 11 (1) ◽  
pp. 48
Author(s):  
John Stein

(1) Background—the magnocellular hypothesis proposes that impaired development of the visual timing systems in the brain that are mediated by magnocellular (M-) neurons is a major cause of dyslexia. Their function can now be assessed quite easily by analysing averaged visually evoked event-related potentials (VERPs) in the electroencephalogram (EEG). Such analysis might provide a useful, objective biomarker for diagnosing developmental dyslexia. (2) Methods—in adult dyslexics and normally reading controls, we recorded steady state VERPs, and their frequency content was computed using the fast Fourier transform. The visual stimulus was a black and white checker board whose checks reversed contrast every 100 ms. M- cells respond to this stimulus mainly at 10 Hz, whereas parvocells (P-) do so at 5 Hz. Left and right visual hemifields were stimulated separately in some subjects to see if there were latency differences between the M- inputs to the right vs. left hemispheres, and these were compared with the subjects’ handedness. (3) Results—Controls demonstrated a larger 10 Hz than 5 Hz fundamental peak in the spectra, whereas the dyslexics showed the reverse pattern. The ratio of subjects’ 10/5 Hz amplitudes predicted their reading ability. The latency of the 10 Hz peak was shorter during left than during right hemifield stimulation, and shorter in controls than in dyslexics. The latter correlated weakly with their handedness. (4) Conclusion—Steady state visual ERPs may conveniently be used to identify developmental dyslexia. However, due to the limited numbers of subjects in each sub-study, these results need confirmation.
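The analysis code is not given in the abstract; the quantity it relies on, the spectral amplitude at a single frequency, can be sketched as a one-bin discrete Fourier transform in Python (names and the synthetic trace are hypothetical):

```python
import cmath
import math

def amplitude_at(signal, fs, freq):
    """Amplitude of the `freq`-Hz component of `signal` sampled at `fs` Hz
    (a single-bin discrete Fourier transform)."""
    n = len(signal)
    acc = sum(x * cmath.exp(-2j * math.pi * freq * k / fs)
              for k, x in enumerate(signal))
    return 2 * abs(acc) / n

# Synthetic one-second trace: a strong 10 Hz component (the M- response in the
# study's terms) plus a weaker 5 Hz component (the P- response).
fs = 1000
signal = [math.sin(2 * math.pi * 10 * k / fs)
          + 0.4 * math.sin(2 * math.pi * 5 * k / fs)
          for k in range(fs)]

ratio = amplitude_at(signal, fs, 10) / amplitude_at(signal, fs, 5)
```

A ratio above 1, as here, corresponds to the pattern the study reports for controls; dyslexic readers showed the reverse.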


Author(s):  
ADIL GURSEL KARACOR ◽  
ERDAL TORUN ◽  
RASIT ABAY

Identifying the type of an approaching aircraft, whether it is a helicopter, a fighter jet, or a passenger plane, is an important task in both military and civilian practice. This task is normally performed using radar or RF signals. In this study, we suggest an alternative method that uses a still image instead of RF or radar data. The image was transformed into a binary black-and-white image using a Matlab script built on Image Processing Toolbox commands, in order to extract the necessary features. The extracted image data of four different types of aircraft were fed into a three-layer feed-forward artificial neural network for classification. Satisfactory results were achieved, with the rate of successful classification turning out to be 97% on average.
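The Matlab script itself is not shown; the binarization step it performs can be illustrated in Python (names and the threshold are hypothetical), thresholding a grayscale image to 0/1 and deriving one simple feature:

```python
def to_binary(gray, threshold=128):
    """Threshold a grayscale image (rows of 0-255 values) to a 0/1 image."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def white_fraction(binary):
    """One example feature: the fraction of white pixels in the silhouette."""
    flat = [px for row in binary for px in row]
    return sum(flat) / len(flat)
```

Features of this kind, computed for each aircraft image, are what a feed-forward network would then receive as its input vector.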


Data Science ◽  
2021 ◽  
pp. 1-20
Author(s):  
Laura Boeschoten ◽  
Roos Voorvaart ◽  
Ruben Van Den Goorbergh ◽  
Casper Kaandorp ◽  
Martine De Vos

The General Data Protection Regulation (GDPR) grants all natural persons the right to access their personal data if it is being processed by data controllers. Data controllers are obliged to share the data in an electronic format and often provide it in a so-called Data Download Package (DDP). These DDPs contain all data collected by public and private entities during the course of a citizen's digital life and form a treasure trove for social scientists. However, the data can be deeply private. To protect the privacy of research participants while using their DDPs for scientific research, we developed a de-identification algorithm that is able to handle the typical characteristics of DDPs. These include regularly changing file structures, visual and textual content, differing file formats, and private information such as usernames. We investigate the performance of the algorithm and illustrate how it can be tailored towards specific DDP structures.
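The paper's algorithm is not reproduced in the abstract; a much-simplified sketch of one of its subtasks, replacing known usernames and @-handles in text with a placeholder (function and placeholder names hypothetical), could look like this:

```python
import re

def deidentify(text: str, usernames) -> str:
    """Replace each known username and any @-handle with a placeholder."""
    for name in usernames:
        text = re.sub(re.escape(name), "__person__", text, flags=re.IGNORECASE)
    text = re.sub(r"@\w+", "__person__", text)  # catch handles not on the list
    return text
```

A real DDP pipeline would additionally have to walk changing directory structures and handle images, which is where most of the engineering effort lies.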


2019 ◽  
Author(s):  
Abhishek Singh

Abstract Background: Big data analysis requires the ability to process large datasets, which are often held in formats fine-tuned for corporate use. Only recently has the need for big data caught the attention of low-budget corporate groups and academia, who typically lack the money and resources to buy expensive licenses for big data analysis platforms such as SAS. Corporations continue to work with the SAS data format largely because of organizational history and because their existing code has been built on it; data providers therefore continue to supply data in SAS formats. An acute need has arisen from this gap: the data are in SAS format, but many coders have no SAS expertise or training, since the economic and inertial forces that shaped these two groups have been different. Method: We analyze these differences and hence the need for SasCsvToolkit, which generates a CSV file from SAS-format data so that data scientists can apply their skills in other tools that process CSVs, such as R, SPSS, or even Microsoft Excel. It also provides conversion of CSV files to SAS format. In addition, a SAS database programmer often struggles to find the right method for a database search, exact match, substring match, except condition, filters, unique values, table joins, and data mining, for which the toolkit also provides template scripts to modify and use from the command line. Results: The toolkit has been implemented on the SLURM scheduler platform as a `bag-of-tasks` algorithm for parallel and distributed workflows, and a serial version has also been incorporated. Conclusion: In the age of big data, where there are many file formats and each software and analytics environment has its own semantics for specific file types, SasCsvToolkit's functions will be very handy for data engineers.
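The toolkit's SLURM scripts are not shown in the abstract; the `bag-of-tasks` idea, in which every file conversion is an independent job, can be sketched in Python (names hypothetical, with a stand-in for the actual SAS-to-CSV conversion):

```python
from concurrent.futures import ThreadPoolExecutor

def convert_one(path: str) -> str:
    # Stand-in for one SAS-to-CSV conversion; the real toolkit would read the
    # .sas7bdat file here and write a CSV alongside it.
    return path.rsplit(".", 1)[0] + ".csv"

def convert_all(paths, workers=4):
    # Bag of tasks: each file is independent of the others, so the batch can
    # be farmed out to a pool of workers (threads here; SLURM jobs in the paper).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(convert_one, paths))
```

Because no task depends on another's output, the same pattern scales from a thread pool on one machine to an array of scheduler jobs on a cluster.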


2014 ◽  
Vol 4 (1) ◽  
Author(s):  
Colette Leung

Bobet, Leah. Above. New York: Arthur A. Levine Books-Scholastic, 2012. Print.

This Young Adult urban fantasy novel takes place in present-day Toronto, Canada. The main character is Matthew, a teenager growing up in an underground, secret community known as Safe. This community was founded by Matthew's guardian, Atticus, for disabled outcasts and people with abnormalities. For example, Atticus has claws for hands, and Matthew has scales. In this underground community, Matthew is Teller, which means that he collects and remembers the stories of the different individuals living in Safe. Matthew is in love with the traumatized girl Ariel, who can shape-shift into a bee and has wings. Ariel came to Safe as a teenager, having lived in the city before then, but she is slow to trust others, including Matthew, and runs away frequently.

Safe is threatened by an exile, known as Corner, who works with an army of shadows. Eventually, Corner invades Safe by following Ariel home after one of the times she runs away. This causes the community to disperse Above, which is actually downtown Toronto. Once there, with the help of Ariel, Matthew has to reunite his community and reclaim Safe. In order to do this, Matthew must discover the history of Corner and its connection to Safe. He learns that there are two sides to every story, and not everything is black and white. Good people can make mistakes, and love and relationships are complex and defining elements of what it means to be human.

Above has important messages about themes of "good" and "evil" and the gray areas in between. By blurring the lines between fantasy, magic, and medicine, these themes are easy to bridge into the real world. The focus on outcasts and disabled people gives the book a unique perspective, and the setting takes readers to both well-known and often passed-over areas of downtown Toronto.

The book suffers, however, from poor setup and slow character development. Leah Bobet uses a stilted writing style, meant to reflect the main character's education and state of mind. Often this style makes the plotline difficult to follow and undercuts some of the more intriguing descriptions of Toronto. Readers are also launched into the world without explanation, which can make it difficult to figure out what is going on for the first half of the book. The story can be even more confusing because it is told in patchwork: outside of Matthew's main storyline, the narratives of other characters are interwoven into the book, so not all events are chronological.

Above has a good premise that will appeal to the right group of young adults, but with the difficult writing level and the lack of setup, some of the target audience might lose interest before finishing the novel. It is worth noting that some of the content deals with difficult topics, including mental illness, abuse, disability, poverty, gender identification, people of different and mixed ethnicities, experimentation on people, and death.

Recommended with Reservations: 2 out of 4 stars

Reviewer: Colette Leung

Colette Leung is a graduate student at the University of Alberta, working in the fields of Library and Information Science and Humanities Computing, who loves reading, cats, and tea. Her research interests focus on how digital tools can be used to explore fields such as literature, language, and history in new and innovative ways.


2019 ◽  
Vol 8 (4) ◽  
pp. 9685-9690

The modern world revolves around the word "privacy". Every individual aims to secure their data and transactions so that no one can access them without proper authentication. In this digital era, all the data stored on the internet is protected by a password. The general opinion is that a password can protect the data from being acquired by an unauthorized user. The issue is what happens after an authorized login. Once we log in to our account, all our actions, the state of the browser, and timestamps are recorded in a simple text file known as a "cookie". In this paper, we propose a mechanism that is easy to implement and robust in providing authentication to the session cookie. This prevents an unauthorized user from gaining access to our private data. Our mechanism provides authentication by using the concept of hashing combined with a unique identifier.
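The paper's exact scheme is not detailed in the abstract; a common way to combine hashing with a unique identifier is an HMAC-signed session cookie, sketched here in Python (names hypothetical):

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # kept server-side only, never sent

def make_cookie(session_id: str) -> str:
    """Bind the unique session id to a keyed hash (HMAC) so that a forged
    or tampered cookie fails verification."""
    tag = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def verify_cookie(cookie: str) -> bool:
    """Recompute the tag for the presented id and compare in constant time."""
    session_id, _, tag = cookie.rpartition(".")
    expected = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

An attacker who copies the tag but changes the identifier, or alters the cookie contents, produces a value the server rejects.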


Author(s):  
Janardan Kulkarni ◽  
Jay Prajapati ◽  
Sakshi Patel

A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Cloud computing is the dynamic provisioning of IT capabilities (hardware, software, or services) from third parties over a network. However, this technology is still in its initial stages of development and suffers from threats and vulnerabilities that prevent users from trusting it. Various malicious activities by illegitimate users have threatened this technology, such as data misuse, inflexible access control, and limited monitoring. These threats may result in damage to, or illegal access of, users' critical and confidential data. This article describes the impact of these vulnerabilities and threats in order to create awareness among organisations and users, so that they can adopt this technology with trust and choose a trusted provider with sound security policies. Here we define cloud-specific vulnerabilities and cloud-feature vulnerabilities, and propose a reference vulnerability architecture of cloud computing together with the threats related to cloud computing. Cloud security and privacy play an important role in avoiding cloud threats. Cloud privacy concerns the expression of, or devotion to, various legal and non-legal norms regarding the right to a private life; cloud security concerns the confidentiality, availability, and reliability of data or information. With the development of cloud computing, security has become a top priority. In this article we discuss the characteristics of vulnerabilities, cloud vulnerabilities, and cloud threats, as well as how we can overcome or avoid them and keep our data safe.


2021 ◽  
Vol 1203 (3) ◽  
pp. 032014
Author(s):  
Jakub Motl ◽  
Albert Bradáč ◽  
Filip Suchomel ◽  
Kateřina Bucsuházy

Abstract The aim of this article is to compare vehicle headlamps in terms of pedestrian visibility in nighttime conditions. The study was designed to obtain results that could serve as a basis for pedestrian-vehicle accident analysis in terms of visibility during night driving. Comparable vehicles (same vehicle type and model year) with different headlamp types were used for the study; three types of headlamps (halogen, xenon, and LED) were analysed. Experiments were carried out under similar conditions (straight road, nighttime, no disturbing factors). During a series of static tests, the vehicle approached to predefined distances from the figurant, a pedestrian standing on the right side of the roadway. The luminance analysis used the Luminance Distribution Analyser LumiDISP, software for analysing luminance conditions based on the evaluation of image data from digital photos.
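LumiDISP's internals are not described in the abstract; for orientation, photo-based luminance tools build on the standard Rec. 709 relative-luminance weighting of a pixel's colour channels, shown here as a minimal Python sketch (the function name is hypothetical, and real photometric software additionally calibrates for the camera's response):

```python
def relative_luminance(r, g, b):
    """Rec. 709 luma weighting of linear R, G, B channel values (0-255).
    Green dominates because the eye is most sensitive to it."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

Applied per pixel across a night-scene photograph, this is the kind of map on which the pedestrian-visibility comparison between headlamp types is based.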


2018 ◽  
Author(s):  
Pamela H Russell ◽  
Debashis Ghosh

Abstract The radiology community has adopted several widely used standards for medical image files, including the popular DICOM (Digital Imaging and Communication in Medicine) and NIfTI (Neuroimaging Informatics Technology Initiative) standards. These file formats include image intensities as well as potentially extensive metadata. The NIfTI standard specifies a particular set of header fields describing the image and minimal information about the scan. DICOM headers can include any of over 4,000 available metadata attributes spanning a variety of topics. NIfTI files contain all slices for an image series, while DICOM files capture single slices, and image series are typically organized into a directory. Each DICOM file contains metadata for the image series as well as the individual image slice. The programming environment R is popular for data analysis due to its free and open code, active ecosystem of tools and users, and excellent system of contributed packages. Currently, many published radiological image analyses are performed with proprietary software or custom unpublished scripts. However, R is increasing in popularity in this area due to several packages for processing and analysis of image files. While these R packages handle image import and processing, no existing package makes image metadata conveniently accessible. Extracting image metadata, combining it across slices, and converting it to useful formats can be prohibitively cumbersome, especially for DICOM files. We present radtools, an R package for smooth navigation of medical image data. Radtools makes the problem of extracting image metadata trivially simple, providing simple functions to explore and return information in familiar R data structures. Radtools also facilitates extraction of image data and viewing of image slices.
The package is freely available under the MIT license at https://github.com/pamelarussell/radtools and is easily installable from the Comprehensive R Archive Network (https://cran.r-project.org/package=radtools).

