Improvement of images by using gradation transformations of their Fourier images

2021 ◽  
Vol 2 (2(58)) ◽  
pp. 16-19
Author(s):  
Ihor Polovynko ◽  
Lubomyr Kniazevich

The object of research is low-quality digital images. The presented work is devoted to the problem of digital processing of low-quality images, which is one of the most important tasks of data science in the field of extracting useful information from a large data set. It is proposed to carry out image enhancement by means of tonal processing of their Fourier images. The basis for this approach is the fact that Fourier images are described by brightness values over a wide range, which can be significantly reduced by gradation transformations. The work carried out the Fourier transform of the image with separation of the amplitude and phase. The important role of the phase in forming the image obtained after the inverse Fourier transform is shown. Although information about the signal amplitude is lost during phase-only analysis, all the main details nevertheless correspond accurately to the initial image. This suggests that, when modifying the Fourier spectra of images, it is necessary to take into account the effect on both the amplitude and the phase of the object under study. The effectiveness of the proposed method is demonstrated on satellite images of the Earth's surface. It is shown that, after a logarithmic gradation transformation of the image's Fourier spectrum followed by the inverse Fourier transform, an image is obtained that is more contrasting than the original one, which will certainly facilitate work with it during visual analysis. To explain the results obtained, the gradation transformation was expanded into a Mercator series. It is shown that the resulting image consists of two parts. The first of them corresponds to the reproduction of the original image obtained by the inverse Fourier transform, and the second performs smoothing of its brightness, similar to the action of the combined method of spatial image enhancement.
When using the proposed method, preprocessing is also necessary; as a rule, it includes operations for centering the Fourier image, as well as converting the original data into floating-point format.
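The pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the authors' exact procedure: the spectrum is centered with `fftshift`, a logarithmic gradation transform (`log(1 + |F|)`) compresses the amplitude's wide dynamic range, the original phase is kept, and the inverse transform is applied.

```python
import numpy as np

def enhance_via_fourier_gradation(img):
    """Apply a logarithmic gradation transform to the Fourier amplitude,
    keep the original phase, and invert the transform."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))  # centered spectrum
    amplitude, phase = np.abs(F), np.angle(F)
    # Logarithmic gradation: compresses the wide range of |F| values.
    graded = np.log1p(amplitude)
    F_mod = graded * np.exp(1j * phase)  # recombine with the original phase
    out = np.real(np.fft.ifft2(np.fft.ifftshift(F_mod)))
    # Rescale to [0, 1] for display.
    out -= out.min()
    return out / out.max() if out.max() > 0 else out

# Toy gradient standing in for a satellite image.
img = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
enhanced = enhance_via_fourier_gradation(img)
```

Note that only the amplitude is modified here; as the abstract stresses, the phase carries the main structural detail and must be preserved for the inverse transform to reproduce the scene.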

2020 ◽  
Vol 8 ◽  
Author(s):  
Devasis Bassu ◽  
Peter W. Jones ◽  
Linda Ness ◽  
David Shallcross

Abstract In this paper, we present a theoretical foundation for a representation of a data set as a measure in a very large hierarchically parametrized family of positive measures, whose parameters can be computed explicitly (rather than estimated by optimization), and illustrate its applicability to a wide range of data types. The preprocessing step then consists of representing data sets as simple measures. The theoretical foundation consists of a dyadic product formula representation lemma, and a visualization theorem. We also define an additive multiscale noise model that can be used to sample from dyadic measures and a more general multiplicative multiscale noise model that can be used to perturb continuous functions, Borel measures, and dyadic measures. The first two results are based on theorems in [15, 3, 1]. The representation uses the very simple concept of a dyadic tree and hence is widely applicable, easily understood, and easily computed. Since the data sample is represented as a measure, subsequent analysis can exploit statistical and measure theoretic concepts and theories. Because the representation uses the very simple concept of a dyadic tree defined on the universe of a data set, and the parameters are simply and explicitly computable and easily interpretable and visualizable, we hope that this approach will be broadly useful to mathematicians, statisticians, and computer scientists who are intrigued by or involved in data science, including its mathematical foundations.
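The dyadic-tree parameters the authors describe as "simply and explicitly computable" can be illustrated for a one-dimensional sample on [0, 1): recursively halve each interval and record, at every node, the fraction of sample mass falling into the left child. This sketch is only an illustrative reading of that construction, not the paper's full representation.

```python
import numpy as np

def dyadic_parameters(points, depth=3, lo=0.0, hi=1.0):
    """Recursively split [lo, hi) in half and record, for each dyadic
    interval, the fraction of sample mass in its left half. Returns
    {(level, index): left_fraction} -- explicitly computed, no
    optimization involved."""
    params = {}

    def recurse(pts, level, index, lo, hi):
        if level == depth or len(pts) == 0:
            return
        mid = (lo + hi) / 2.0
        left, right = pts[pts < mid], pts[pts >= mid]
        params[(level, index)] = len(left) / len(pts)
        recurse(left, level + 1, 2 * index, lo, mid)
        recurse(right, level + 1, 2 * index + 1, mid, hi)

    recurse(np.asarray(points, float), 0, 0, lo, hi)
    return params

sample = np.array([0.1, 0.2, 0.3, 0.6, 0.9])
params = dyadic_parameters(sample, depth=2)  # e.g. params[(0, 0)] = 3/5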


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Luca Pappalardo ◽  
Paolo Cintia ◽  
Alessio Rossi ◽  
Emanuele Massucco ◽  
Paolo Ferragina ◽  
...  

Abstract Soccer analytics is attracting increasing interest in academia and industry, thanks to the availability of sensing technologies that provide high-fidelity data streams for every match. Unfortunately, these detailed data are owned by specialized companies and hence are rarely publicly available for scientific research. To fill this gap, this paper describes the largest open collection of soccer-logs ever released, containing all the spatio-temporal events (passes, shots, fouls, etc.) that occurred during each match for an entire season of seven prominent soccer competitions. Each match event contains information about its position, time, outcome, player and characteristics. The nature of team sports like soccer, halfway between the abstraction of a game and the reality of complex social systems, combined with the unique size and composition of this dataset, provides an ideal ground for tackling a wide range of data science problems, including the measurement and evaluation of performance, both at the individual and at the collective level, and the determinants of success and failure.
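A typical first analysis over such event logs is aggregating outcomes per team. The sketch below uses hypothetical records whose field names (`type`, `team`, `minute`, `outcome`) are illustrative only and do not reflect the dataset's actual schema.

```python
# Each record mimics a soccer-log event; the field names and values
# here are invented for illustration, not the released dataset's schema.
events = [
    {"type": "pass", "team": "A", "minute": 3,  "outcome": "accurate"},
    {"type": "shot", "team": "A", "minute": 17, "outcome": "goal"},
    {"type": "foul", "team": "B", "minute": 24, "outcome": "card"},
    {"type": "pass", "team": "B", "minute": 25, "outcome": "inaccurate"},
]

def pass_accuracy(events, team):
    """Share of a team's passes flagged as accurate."""
    passes = [e for e in events if e["type"] == "pass" and e["team"] == team]
    if not passes:
        return 0.0
    return sum(e["outcome"] == "accurate" for e in passes) / len(passes)

acc = pass_accuracy(events, "A")
```

Per-event position and time fields (present in the real collection) enable the spatio-temporal analyses the paper targets, such as pass networks or pitch-zone heatmaps.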


1989 ◽  
Vol 43 (1) ◽  
pp. 33-37 ◽  
Author(s):  
Charles K. Mann ◽  
Thomas J. Vickers

The need may arise to use spectral data with a sample interval and slit width that differ from those used when the data were recorded. Results are presented of an examination of the quantitative accuracy of conversion of these parameters using Raman spectra. Assuming adequate sampling in the original set, the interval can be changed by forward and inverse Fourier transform with zero-filling. This procedure only permits expansion by powers of two, but by overexpansion of the original data set, points can be selected to achieve or very closely approximate other factors. Expansion by a factor of ten is demonstrated. The effect of varying slit function is compensated for by deconvoluting the data against the slit function originally used and then convoluting with that which corresponds to the desired condition. Conditions for carrying out this conversion with less than 0.1% error are described.
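The interval-conversion step described above — forward FFT, zero-filling, inverse FFT — can be sketched as follows. This is a minimal band-limited interpolation in NumPy, not the authors' exact procedure; the power-of-two restriction on the expansion factor mirrors the constraint stated in the abstract.

```python
import numpy as np

def fourier_interpolate(y, factor):
    """Resample a uniformly sampled signal onto a grid `factor` times
    finer by zero-filling the high end of its spectrum."""
    n = len(y)
    Y = np.fft.rfft(y)
    # Zero-fill: place the original spectrum in a longer spectrum array.
    Y_padded = np.zeros(factor * n // 2 + 1, dtype=complex)
    Y_padded[: len(Y)] = Y
    # Invert on the dense grid; rescale so amplitudes are preserved.
    return np.fft.irfft(Y_padded, n=factor * n) * factor

t = np.linspace(0, 1, 64, endpoint=False)
y = np.sin(2 * np.pi * 4 * t)          # band-limited test signal
y_dense = fourier_interpolate(y, 8)    # 8x finer sample interval
```

Provided the original sampling was adequate (as the abstract assumes), the dense samples pass exactly through the original points, so other expansion factors can be approximated by over-expanding and selecting points.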


2019 ◽  
Vol 50 (4) ◽  
pp. 693-702 ◽  
Author(s):  
Christine Holyfield ◽  
Sydney Brooks ◽  
Allison Schluterman

Purpose Augmentative and alternative communication (AAC) is an intervention approach that can promote communication and language in children with multiple disabilities who are beginning communicators. While a wide range of AAC technologies are available, little is known about the comparative effects of specific technology options. Given that engagement can be low for beginning communicators with multiple disabilities, the current study provides initial information about the comparative effects of 2 AAC technology options—high-tech visual scene displays (VSDs) and low-tech isolated picture symbols—on engagement. Method Three elementary-age beginning communicators with multiple disabilities participated. The study used a single-subject, alternating treatment design with each technology serving as a condition. Participants interacted with their school speech-language pathologists using each of the 2 technologies across 5 sessions in a block randomized order. Results According to visual analysis and nonoverlap of all pairs calculations, all 3 participants demonstrated more engagement with the high-tech VSDs than the low-tech isolated picture symbols as measured by their seconds of gaze toward each technology option. Despite the difference in engagement observed, there was no clear difference across the 2 conditions in engagement toward the communication partner or use of the AAC. Conclusions Clinicians can consider measuring engagement when evaluating AAC technology options for children with multiple disabilities and should consider evaluating high-tech VSDs as 1 technology option for them. Future research must explore the extent to which differences in engagement to particular AAC technologies result in differences in communication and language learning over time as might be expected.


2019 ◽  
Vol 16 (7) ◽  
pp. 808-817 ◽  
Author(s):  
Laxmi Banjare ◽  
Sant Kumar Verma ◽  
Akhlesh Kumar Jain ◽  
Suresh Thareja

Background: In spite of the availability of various treatment approaches, including surgery, radiotherapy, and hormonal therapy, the steroidal aromatase inhibitors (SAIs) play a significant role as chemotherapeutic agents for the treatment of estrogen-dependent breast cancer, with the benefit of a reduced risk of recurrence. However, due to the greater toxicity and side effects associated with currently available anti-breast cancer agents, there is an urgent need to develop target-specific AIs with a safer anti-breast cancer profile. Methods: It is a challenging task to design target-specific and less toxic SAIs, though molecular modeling tools, viz. molecular docking simulations and QSAR, have been applied for more than two decades to the fast and efficient design of novel, selective, potent, and safe molecules against various biological targets to fight a number of dreaded diseases/disorders. In order to design novel and selective SAIs, structure-guided, molecular-docking-assisted, alignment-dependent 3D-QSAR studies were performed on a data set comprising 22 molecules bearing a steroidal scaffold with a wide range of aromatase inhibitory activity. Results: The 3D-QSAR model developed using the molecular weighted (MW) extent alignment approach showed good statistical quality and predictive ability when compared to the model developed using the moments of inertia (MI) alignment approach. Conclusion: The explored binding interactions and generated pharmacophoric features (steric and electrostatic) of the steroidal molecules could be exploited for the further design, direct synthesis, and development of new, potentially safer SAIs that can be effective in reducing the mortality and morbidity associated with breast cancer.


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing a 1D input signal into 2D patterns, motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent the fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model, as a result, is a combination of LSTM and CNN. We evaluate the model over two data sets. For the first data set, which is more standardized than the other, our model outperforms or at least equals previous works. In the case of the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% in some cases. We also analyze the effect of these parameters on performance.
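The core idea, turning a 1D sensor trace into a 2D "fingerprint" that a CNN can consume, can be sketched without the learned components. In the sketch below, a plain sliding-window reshape stands in for the paper's LSTM encoder (which produces the rows from learned encodings instead), so this is an assumption-laden simplification, not the proposed model.

```python
import numpy as np

def signal_to_fingerprint(signal, width):
    """Arrange a 1D sensor signal into a 2D pattern by stacking
    consecutive windows of length `width` as rows. A stand-in for
    the paper's LSTM encoder, which would emit learned row encodings."""
    n_rows = len(signal) // width
    return np.asarray(signal[: n_rows * width], float).reshape(n_rows, width)

sig = np.sin(np.linspace(0, 8 * np.pi, 128))  # toy accelerometer-like trace
fp = signal_to_fingerprint(sig, width=16)     # an 8 x 16 "image" for a CNN
```

The window size and sliding size varied in the second experiment correspond to `width` and the row stride here; once the signal is a 2D array, any image classifier can be applied downstream.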


Author(s):  
Ritu Khandelwal ◽  
Hemlata Goyal ◽  
Rajveer Singh Shekhawat

Introduction: Machine learning is an intelligent technology that works as a bridge between business and data science. With the involvement of data science, the business goal focuses on findings that yield valuable insights from the available data. A large part of Indian cinema is Bollywood, a multi-million dollar industry. This paper attempts to predict whether an upcoming Bollywood movie will be a Blockbuster, Superhit, Hit, Average, or Flop by applying machine learning techniques for classification and prediction. To build a classifier or prediction model, the first step is the learning stage, in which the training data set is used to train the model with some technique or algorithm. Methods: Classification and prediction techniques such as Support Vector Machine (SVM), Random Forest, Decision Tree, Naïve Bayes, Logistic Regression, AdaBoost, and KNN are applied in search of efficient and effective results. All these functionalities can be applied through GUI-based workflows organized into categories such as Data, Visualize, Model, and Evaluate. Result: The rules generated during the learning stage form the model, which is then used to predict future trends in different types of organizations. Conclusion: This paper focuses on a comparative analysis, performed on parameters such as accuracy and the confusion matrix, to identify the best possible model for predicting a movie's success. Using advertisement propaganda, production houses can plan the best time to release the movie according to the predicted success rate to gain higher benefits.
Discussion: Data mining is the process of discovering patterns in large data sets; the relationships discovered help solve business problems and predict forthcoming trends. This prediction can help production houses with advertisement propaganda, and they can also plan their costs; by assuring these factors, they can make the movie more profitable.


Author(s):  
Eun-Young Mun ◽  
Anne E. Ray

Integrative data analysis (IDA) is a promising new approach in psychological research and has been well received in the field of alcohol research. This chapter provides a larger unifying research synthesis framework for IDA. Major advantages of IDA of individual participant-level data include better and more flexible ways to examine subgroups, model complex relationships, deal with methodological and clinical heterogeneity, and examine infrequently occurring behaviors. However, between-study heterogeneity in measures, designs, and samples and systematic study-level missing data are significant barriers to IDA and, more broadly, to large-scale research synthesis. Based on the authors’ experience working on the Project INTEGRATE data set, which combined individual participant-level data from 24 independent college brief alcohol intervention studies, it is also recognized that IDA investigations require a wide range of expertise and considerable resources and that some minimum standards for reporting IDA studies may be needed to improve transparency and quality of evidence.


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 348
Author(s):  
Choongsang Cho ◽  
Young Han Lee ◽  
Jongyoul Park ◽  
Sangkeun Lee

Semantic image segmentation has a wide range of applications. When it comes to medical image segmentation, accuracy is even more important than in other areas, because the performance gives useful information directly applicable to disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture known as U-Net. To effectively reflect the spatial features in the feature maps of an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, the spatial feature is estimated from the feature maps, and the learned weighting parameters are obtained from the computed map, since segmentation results are predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is replaced with widely used convolutional frameworks: VGG, ResNet, and bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For the performance evaluation of the proposed architecture, we used three data sets covering different medical imaging modalities. Experimental results show that the network with the proposed self-spatial adaptive weighting block based on the ResNet framework gave the highest IoU and DICE scores on the three tasks compared to other methods. In particular, the segmentation network combining the proposed self-spatially adaptive block and the ResNet framework recorded the highest improvements, 3.01% in IoU and 2.89% in DICE score, on the Nerve data set. Therefore, we believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
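The bilinear up-sampling that replaces the up-convolutional layer can be written out explicitly for a single 2D feature map. This NumPy sketch is a generic align-corners-style bilinear interpolation, an assumption about the variant used, since the abstract does not specify one.

```python
import numpy as np

def bilinear_upsample(fmap, factor):
    """Increase the spatial resolution of a 2D feature map by `factor`
    using bilinear interpolation (no learned parameters, unlike an
    up-convolutional layer)."""
    h, w = fmap.shape
    H, W = h * factor, w * factor
    # Source coordinates for each output pixel (align-corners style).
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend the four neighbouring feature values per output pixel.
    top = fmap[np.ix_(y0, x0)] * (1 - wx) + fmap[np.ix_(y0, x1)] * wx
    bot = fmap[np.ix_(y1, x0)] * (1 - wx) + fmap[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

fmap = np.array([[0.0, 1.0], [2.0, 3.0]])  # tiny 2x2 feature map
up = bilinear_upsample(fmap, 2)            # 4x4 upsampled map
```

Because the interpolation has no trainable weights, swapping it in for an up-convolution reduces decoder parameters while still restoring spatial resolution.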


2021 ◽  
Vol 11 (4) ◽  
pp. 1431
Author(s):  
Sungsik Wang ◽  
Tae Heung Lim ◽  
Kyoungsoo Oh ◽  
Chulhun Seo ◽  
Hosung Choo

This article proposes a method for the prediction of wide-range two-dimensional refractivity for synthetic aperture radar (SAR) applications, using an inverse distance weighted (IDW) interpolation of high-altitude radio refractivity data from multiple meteorological observatories. The radio refractivity is extracted from an atmospheric data set of twenty meteorological observatories around the Korean Peninsula along a given altitude. Then, from the sparse refractivity data, the two-dimensional regional radio refractivity of the entire Korean Peninsula is derived using IDW interpolation, in consideration of the curvature of the Earth. The refractivities of the four seasons in 2019 are derived at the locations of seven meteorological observatories within the Korean Peninsula, using the refractivity data from the other nineteen observatories. The atmospheric refractivities on 15 February 2019 are then evaluated across the entire Korean Peninsula, using the atmospheric data collected from the twenty meteorological observatories. We found that the proposed IDW interpolation has the lowest average root-mean-square error (RMSE) of ∇M (the gradient of M) and more continuous results than other methods. To assess the resulting IDW refractivity interpolation for airborne SAR applications, all the propagation path losses between Pohang and Heuksando are obtained using the standard atmospheric condition of ∇M = 118 and the observation-based interpolated atmospheric conditions on 15 February 2019. On the terrain surface ranging from 90 km to 190 km, the average path losses in the standard and derived conditions are 179.7 dB and 182.1 dB, respectively. Finally, based on the air-to-ground scenario in the SAR application, two-dimensional illuminated field intensities on the terrain surface are illustrated.
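The IDW interpolation at the heart of the method can be sketched in planar coordinates. This is the textbook Shepard form with weights 1/d^p; the paper's version additionally accounts for the Earth's curvature, which is omitted here, and the station positions and values below are toy numbers, not the Korean observatory data.

```python
import numpy as np

def idw(known_xy, known_vals, query_xy, power=2.0):
    """Inverse distance weighted interpolation: each known station
    contributes with weight 1/d**power; a query that coincides with
    a station returns that station's value directly."""
    known_xy = np.asarray(known_xy, float)
    known_vals = np.asarray(known_vals, float)
    out = []
    for q in np.asarray(query_xy, float):
        d = np.linalg.norm(known_xy - q, axis=1)
        if np.any(d < 1e-12):                 # exact hit on a station
            out.append(known_vals[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        out.append(np.sum(w * known_vals) / np.sum(w))
    return np.array(out)

# Toy "observatories" on a plane with refractivity-like values.
stations = [(0, 0), (1, 0), (0, 1), (1, 1)]
values = [300.0, 310.0, 320.0, 330.0]
est = idw(stations, values, [(0.5, 0.5), (0.0, 0.0)])
```

Evaluating the interpolant on a dense grid of query points yields the two-dimensional regional refractivity map described in the abstract; the leave-one-station-out comparison corresponds to querying at a held-out station's coordinates.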

