Statistical analysis and characterization of Landsat 8 satellite images of forest wildfires regions

Author(s):  
Sandra Paola Hernández-López ◽  
Juan Israel Yañez-Vargas ◽  
Andrea Gonzalez-Ramirez ◽  
Deni Torres-Roman

The worldwide increase in wildfires is largely due to rising temperatures and, to some extent, to the growing carelessness of people who leave large amounts of garbage in forests. Python and MATLAB were used as the working environment. We preprocessed multispectral images acquired by the Landsat 8 satellite, with and without wildfires, in three steps: alignment, characterization and normalization, with the aim of standardizing the images. From the spectral signatures of wildfires and metallic structures, box-and-whisker diagrams, Shannon entropy and mutual information of the images, bands 6, 7, 8, 10 and 11 show similar behavior and carry the most relevant information, whereas bands 1, 2, 3, 4, 5, 8 and 9 carry less information (each image is formed by 11 bands). Singular value decomposition (SVD) then provides the best rank-k approximation to the original data matrix. The purpose of this analysis is to reduce the computational complexity.
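To illustrate the dimensionality-reduction step described above, the following is a minimal Python sketch (not the authors' code) of the best rank-k SVD approximation of a band-stacked matrix and of per-band Shannon entropy; the band layout, the choice of k, and the random placeholder data are assumptions.

```python
# Minimal sketch (not the authors' code): best rank-k approximation of a
# band-stacked Landsat 8 matrix via SVD, plus per-band Shannon entropy.
# Band order and the choice of k are illustrative assumptions.
import numpy as np

def best_rank_k(X, k):
    """Return the best rank-k approximation of X (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def shannon_entropy(band, bins=256):
    """Shannon entropy (bits) of one band's intensity histogram."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Example: 11 normalized bands flattened into an (11, n_pixels) data matrix.
bands = np.random.rand(11, 512 * 512)      # placeholder for real Landsat data
X5 = best_rank_k(bands, k=5)               # reduced, lower-rank representation
entropies = [shannon_entropy(b) for b in bands]
```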

1995 ◽  
Vol 60 (6) ◽  
pp. 960-965 ◽  
Author(s):  
Štefan Hatrík ◽  
Jozef Lehotay ◽  
Jozef Čižmárik

The advanced mathematical method of neural networks was employed to study the surface anaesthetic activity of three homologous series of 2-, 3- and 4-alkoxy substituted morpholinoethyl, piperidinoethyl and azepanylethyl esters of phenylcarbamic acids. RP-HPLC capacity factors were used to characterize the lipophilicity of the tested drugs and were included as inputs to the neural network. A three-layer perceptron trained by backpropagation of errors was successfully used to supplement the incomplete original data matrix and to smooth the noisy biological data. The dependence of surface anaesthesia on the number of C atoms in the side alkoxy chain showed a peak character, which agrees with the theoretical assumptions. Within each homologous series, surface anaesthesia increased from the para to the ortho position of the alkoxy chain (para < meta < ortho). The azepanyl derivatives showed, on average, the highest surface anaesthesia, which is in agreement with the azepanyl substituent having the lowest polarity.
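As an illustration of the architecture described above, here is a minimal sketch of a three-layer perceptron trained by backpropagation and used to fill missing entries of a data matrix; the input features (capacity factor, alkoxy chain length), layer size, and random data are assumptions, not the study's settings.

```python
# Minimal sketch (not the original study code): a three-layer perceptron
# trained by backpropagation to fill gaps in an activity data matrix.
# Feature names (capacity factor k', alkoxy chain length) are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_known = rng.random((40, 2))          # [log k', n_C] for compounds with measured activity
y_known = rng.random(40)               # measured surface anaesthetic activity (placeholder)
X_missing = rng.random((5, 2))         # compounds whose activity is missing

mlp = MLPRegressor(hidden_layer_sizes=(8,),   # one hidden layer -> three-layer perceptron
                   solver="sgd",              # plain backpropagation of errors
                   max_iter=5000, random_state=0)
mlp.fit(X_known, y_known)
y_imputed = mlp.predict(X_missing)     # supplements the incomplete data matrix
```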


2021 ◽  
Vol 13 (8) ◽  
pp. 1593
Author(s):  
Luca Cenci ◽  
Valerio Pampanoni ◽  
Giovanni Laneve ◽  
Carla Santella ◽  
Valentina Boccia

Developing reliable methodologies of data quality assessment is of paramount importance for maximizing the exploitation of Earth observation (EO) products. Among the different factors influencing EO optical image quality, sharpness plays a relevant role. When implementing on-orbit approaches to sharpness assessment, such as the edge method, a crucial step that strongly affects the final results is the selection of suitable edges for the analysis. Within this context, this paper proposes a semi-automatic, statistically-based edge method (SaSbEM) that exploits edges extracted from natural targets easily and widely available on Earth: agricultural fields. For each image analyzed, SaSbEM detects numerous suitable edges (e.g., dozens to hundreds) characterized by specific geometrical and statistical criteria. This guarantees the repeatability and reliability of the analysis. It then applies a standard edge method to assess the sharpness of each edge. Finally, it performs a statistical analysis of the results to obtain a robust characterization of the image sharpness level and its uncertainty. The method was validated using Landsat 8 L1T products. Results showed that SaSbEM is capable of performing a reliable and repeatable sharpness assessment, and that Landsat 8 L1T data are characterized by very good sharpness performance.
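As a hedged illustration of the underlying edge method (not SaSbEM itself), the sketch below differentiates a one-dimensional edge spread function to obtain the line spread function and reports its full width at half maximum as a sharpness metric; the synthetic edge profile is a placeholder.

```python
# Minimal sketch (assumptions, not SaSbEM): the classic edge method on a single
# edge profile - differentiate the edge spread function (ESF) to get the line
# spread function (LSF) and use its full width at half maximum (in pixels)
# as a sharpness metric.
import numpy as np

def edge_sharpness(esf):
    """FWHM of the LSF derived from a 1-D edge spread function."""
    lsf = np.abs(np.gradient(esf.astype(float)))
    half = lsf.max() / 2.0
    above = np.where(lsf >= half)[0]
    return above[-1] - above[0] + 1    # width in pixels (smaller = sharper)

# Synthetic blurred edge (placeholder for a profile across a field boundary).
x = np.linspace(-5, 5, 50)
esf = 1.0 / (1.0 + np.exp(-x / 1.2))
print(edge_sharpness(esf))
```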


BMC Cancer ◽  
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Pihua Han ◽  
Jingjun Zhu ◽  
Guang Feng ◽  
Zizhang Wang ◽  
Yanni Ding

Abstract Background: Breast cancer (BRCA) is one of the most common cancers worldwide. Abnormal alternative splicing (AS) is frequently observed in cancers. This study aims to identify AS events and signatures that might serve as prognostic indicators for BRCA. Methods: Original data for all seven types of splice events were obtained from the TCGA SpliceSeq database. RNA-seq and clinical data of the BRCA cohort were downloaded from the TCGA database. Survival-associated AS events in BRCA were analyzed by a univariate Cox proportional hazards regression model. Prognostic signatures were constructed for prognosis prediction in patients with BRCA based on the survival-associated AS events. Pearson correlation analysis was performed to measure the correlation between the expression of splicing factors (SFs) and the percent spliced in (PSI) values of AS events. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were conducted to identify the pathways in which survival-associated AS events are enriched. Results: A total of 45,421 AS events in 21,232 genes were identified. Among them, 1121 AS events in 931 genes were significantly correlated with survival in BRCA. The established AS prognostic signatures of the seven types could accurately predict BRCA prognosis. The comprehensive AS signature could serve as an independent prognostic factor for BRCA. An SF-AS regulatory network was then established based on the correlation between the expression levels of SFs and the PSI values of AS events. Conclusions: This study revealed survival-associated AS events and signatures that may help predict the survival outcomes of patients with BRCA. Additionally, the constructed SF-AS networks in BRCA may reveal the underlying regulatory mechanisms in BRCA.
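For illustration only, the following sketch shows a univariate Cox proportional hazards fit on the PSI value of a single AS event and a Pearson correlation between a splicing factor's expression and that PSI, using the lifelines and SciPy libraries; the simulated data and column names are assumptions, not the TCGA cohort.

```python
# Minimal sketch (assumed workflow, not the authors' pipeline): univariate Cox
# regression on the PSI value of one AS event, and Pearson correlation between
# a splicing factor's expression and the event's PSI values.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "time": rng.exponential(1000, 200),    # follow-up time (days), simulated
    "event": rng.integers(0, 2, 200),      # 1 = death, 0 = censored
    "psi": rng.random(200),                # PSI of one AS event
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "p"]])          # coefficient and p-value for PSI

sf_expr = rng.random(200)                  # expression of one splicing factor
r, p = pearsonr(sf_expr, df["psi"])        # SF-AS correlation for the network
```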


2004 ◽  
Vol 34 (1) ◽  
pp. 37-52
Author(s):  
Wiktor Jassem ◽  
Waldemar Grygiel

The mid-frequencies and bandwidths of formants 1–5 were measured at targets, at plus 0.01 s and at minus 0.01 s off the targets of vowels in a 100-word list read by five male and five female speakers, for a total of 3390 10-variable spectrum specifications. Each of the six Polish vowel phonemes was represented approximately the same number of times. The 3390 × 10 original-data matrix was processed by probabilistic neural networks to produce a classification of the spectra with respect to (a) vowel phoneme, (b) identity of the speaker, and (c) speaker gender. For (a) and (b), networks with added input information from another independent variable were also used, as well as matrices of the numerical data appropriately normalized. Mean scores for classification with respect to phonemes in a multi-speaker design in the testing sets were around 95%, and mean speaker-dependent scores for the phonemes varied between 86% and 100%, with two speakers scoring 100% correct. The individual voices were identified between 95% and 96% of the time, and classifications of the spectra for speaker gender were practically 100% correct.
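A probabilistic neural network is essentially a Parzen-window (Gaussian-kernel) classifier; the sketch below is a minimal illustration of that idea for 10-variable spectrum vectors, with the smoothing parameter and random data as placeholder assumptions rather than the study's configuration.

```python
# Minimal sketch of a probabilistic neural network (Parzen-window classifier)
# for 10-variable formant spectra; sigma and the random data are illustrative
# assumptions, not the study's settings.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.1):
    """Classify each test vector by the class with the largest summed Gaussian kernel."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)            # squared distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))                # Gaussian pattern-layer outputs
        scores = [k[y_train == c].sum() for c in classes]   # summation layer per class
        preds.append(classes[np.argmax(scores)])            # decision layer
    return np.array(preds)

rng = np.random.default_rng(2)
X = rng.random((300, 10))                 # formant mid-frequencies and bandwidths (placeholder)
y = rng.integers(0, 6, 300)               # six vowel phoneme classes
print(pnn_predict(X[:250], y[:250], X[250:])[:5])
```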


2017 ◽  
Vol 41 (5) ◽  
Author(s):  
Thiago Yamada ◽  
Emerson Carlos Pedrino ◽  
João Juares Soares ◽  
Maria do Carmo Nicoletti

ABSTRACT It is well known that conducting experimental research aimed at characterizing the canopy structure of forests can be a difficult and costly task and generally requires an expert to extract relevant information in loco. Aiming to ease studies related to canopy structures, several techniques have been proposed in the literature, and, among them, various are based on canopy digital image analysis. The research work described in this paper empirically compares two techniques that measure the integrity of the canopy structure of a forest fragment; one is based on central parts of canopy cover images and the other on canopy closure images. For the experiments, 22 central parts of canopy cover images and 22 canopy closure images were used. The images were captured along two transects: T1 (located in the conserved area) and T2 (located in the naturally disturbed area). The canopy digital images were computationally processed and analyzed using the MATLAB platform for the canopy cover images and the Gap Light Analyzer (GLA) for the canopy closure images. The results obtained using these two techniques showed that canopy cover images and, among the employed algorithms, JSEG characterized canopy integrity best. It is worth mentioning that part of the analysis can be conducted automatically, as a quick and precise process, with low material costs involved.
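As a rough illustration of automated canopy image analysis (the paper itself uses JSEG in MATLAB and GLA), the following sketch estimates canopy closure from an upward-looking photograph by thresholding the blue channel; the threshold rule and placeholder image are assumptions.

```python
# Minimal sketch (illustrative only; the paper uses JSEG and GLA): estimate
# canopy closure from a photograph by thresholding the blue channel, where
# bright pixels are taken as sky and dark pixels as canopy.
import numpy as np

def canopy_closure(rgb_image):
    """Fraction of non-sky (canopy) pixels in an upward-looking photo."""
    blue = rgb_image[..., 2].astype(float)
    threshold = blue.mean() + blue.std()      # assumed, simplistic sky/canopy split
    sky = blue > threshold
    return 1.0 - sky.mean()

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # placeholder photo
print(f"Canopy closure: {canopy_closure(img):.2f}")
```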


2021 ◽  
Vol 10 (36) ◽  
pp. 104-107
Author(s):  
Mateus Silva Laranjeira ◽  
Marilisa Guimarães Lara ◽  
Marco Vinicius Chaud ◽  
Olney Leite Fontes ◽  
Antônio Riul Jr

Introduction: The “electronic tongue” is a device commonly used in the analysis of tastants, heavy metal ions, fruit juice and wines, and also in the development of biosensors [1-3]. Briefly, the e-tongue consists of sensing units formed by ultrathin films of distinct materials deposited on gold interdigitated electrodes, which are immersed in liquid samples and characterized by impedance spectroscopy measurements [1]. The e-tongue sensor is based on the global selectivity concept, i.e., the materials forming the sensing units are not selective to any particular substance in the samples; instead, the information is grouped into distinct response patterns, enabling the distinction of complex liquid systems [1]. Aim: Our aim was to use an e-tongue system to assess the homeopathic medicine Belladonna at different degrees of dilution, in an attempt to differentiate highly diluted systems. Methods: The ultrathin films forming the sensing units were prepared by the layer-by-layer technique [4], using conventional polyelectrolytes such as poly(sodium styrene sulfonate) (PSS) and poly(allylamine hydrochloride) (PAH), chitosan and poly(3,4-ethylenedioxythiophene) (PEDOT). Homeopathic medicines (Belladonna 1cH, 6cH, 12cH and 30cH) were prepared by dilution and agitation according to Hahnemann's method [5], using ethanol at 30% (w/w) as vehicle. Experimental data were acquired in blind-test measurements involving the Belladonna samples and the vehicle used in the dilutions. Five independent and consecutive measurements were taken for each solution at 1 kHz and further analysed by Principal Component Analysis (PCA), a statistical method widely employed to reduce the dimensionality of the original data without losing the information in the correlation of the samples [3]. Results: Figure 1 shows that the five independent measurements for each solution are grouped quite close to each other, with a clear distinction between solutions. However, a change in the observed pattern was noticed between measurements taken on different days, indicating reduced reproducibility, although the groups of data could still be identified. Discussion: PCA is a powerful tool widely employed to extract relevant information from the correlated data of e-tongue systems. The PCA plots showed a good statistical correlation of the systems (PC1 + PC2 ≥ 90%), with the solutions being straightforwardly distinguished from each other and from the vehicle used. Conclusion: Despite the differences in the data obtained on different days of analysis, the e-tongue could detect differences among the tested samples, even in the highly diluted cases studied.
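To illustrate the final analysis step, here is a minimal PCA sketch over hypothetical e-tongue readings; the number of sensing units, the data layout, and the random values are assumptions, not the measured impedance data.

```python
# Minimal sketch (assumed data layout, not the authors' script): PCA on the
# impedance readings of the sensing units at 1 kHz, keeping the first two
# principal components used in the PC1 x PC2 plots.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Rows: 5 replicate measurements x 5 solutions (Belladonna 1cH/6cH/12cH/30cH, vehicle).
# Columns: one reading per sensing unit (4 films assumed here).
X = rng.random((25, 4))
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)                       # coordinates for the PCA plot
explained = pca.explained_variance_ratio_.sum() # the paper reports PC1 + PC2 >= 90%
```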


Author(s):  
Bharat Singh ◽  
Om Prakash Vyas

Nowadays, applications dealing with Big Data are widely used in many popular areas. To handle such data, researchers have developed various approaches over the last few decades. A recently investigated technique for factoring the data matrix into a lower-dimensional space through latent factors is the so-called matrix factorization. One of the problems with non-negative matrix factorization (NMF) approaches is that random initialization cannot guarantee a global optimum within a limited number of iterations and tends to yield only local optima. To address this and the associated computational expense, the authors propose a new approach that focuses on the initial values of the decomposition. They devise an algorithm based on particle swarm optimization (PSO) for initializing the values of the decomposed matrices. In this paper, the authors present a genetic algorithm-based technique incorporated into non-negative matrix factorization. Experimental results show that the proposed method converges very fast in comparison to other low-rank approximation techniques such as simple multiplicative NMF and ACLS.
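The sketch below is a simplified stand-in for the proposed idea: it seeds scikit-learn's NMF with the best candidate from a tiny evolutionary search over random non-negative factor pairs. It is not the authors' PSO or genetic algorithm; the population size, mutation scale, and data are illustrative.

```python
# Minimal sketch (a simplified stand-in, not the authors' algorithm): seed NMF
# with the best candidate from a tiny evolutionary search over random
# non-negative factor pairs, then refine with scikit-learn's coordinate-descent solver.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
V = np.abs(rng.random((50, 30)))                 # non-negative data matrix (placeholder)
k = 5                                            # target rank

def random_pair():
    return np.abs(rng.random((50, k))), np.abs(rng.random((k, 30)))

def fitness(W, H):
    return -np.linalg.norm(V - W @ H)            # higher is better (lower reconstruction error)

# Tiny evolutionary loop: keep the fitter half, perturb it to refill the population.
population = [random_pair() for _ in range(20)]
for _ in range(30):
    population.sort(key=lambda p: fitness(*p), reverse=True)
    survivors = population[:10]
    population = survivors + [(np.abs(W + 0.05 * rng.standard_normal(W.shape)),
                               np.abs(H + 0.05 * rng.standard_normal(H.shape)))
                              for W, H in survivors]

W0, H0 = max(population, key=lambda p: fitness(*p))
model = NMF(n_components=k, init="custom", max_iter=500)
W = model.fit_transform(V, W=W0, H=H0)           # refined factorization from the seeded start
```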


2019 ◽  
Vol 51 (10) ◽  
pp. 981-988 ◽  
Author(s):  
Xiaolan Rao ◽  
Richard A Dixon

Abstract Co-expression network analysis is one of the most powerful approaches for interpretation of large transcriptomic datasets. It enables characterization of modules of co-expressed genes that may share biological functional linkages. Such networks provide an initial way to explore functional associations from gene expression profiling and can be applied to various aspects of plant biology. This review presents the applications of co-expression network analysis in plant biology and addresses optimized strategies from the recent literature for performing co-expression analysis on plant biological systems. Additionally, we describe the combined interpretation of co-expression analysis with other genomic data to enhance the generation of biologically relevant information.
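As a generic illustration of co-expression network construction (not a specific tool from the review), the sketch below thresholds a gene-gene Pearson correlation matrix to obtain an adjacency matrix and counts node degrees; the expression matrix and the 0.8 cutoff are placeholder assumptions.

```python
# Minimal sketch (generic illustration, not a specific tool from the review):
# build a simple co-expression network by thresholding pairwise Pearson
# correlations of gene expression profiles.
import numpy as np

rng = np.random.default_rng(5)
expr = rng.random((100, 20))              # 100 genes x 20 samples (placeholder)
corr = np.corrcoef(expr)                  # gene-gene Pearson correlation matrix
adjacency = (np.abs(corr) > 0.8) & ~np.eye(100, dtype=bool)  # co-expression edges
degrees = adjacency.sum(axis=1)           # hub genes have many co-expressed partners
```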


2010 ◽  
Vol 34 (3) ◽  
pp. 327-335 ◽  
Author(s):  
Jaap Harlaar ◽  
Merel Brehm ◽  
Jules G. Becher ◽  
Daan J. J. Bregman ◽  
Jaap Buurke ◽  
...  

Ankle Foot Orthoses (AFOs) to promote walking ability are a common treatment in patients with neurological or muscular diseases. However, guidelines on the prescription of AFOs are currently based on a low level of evidence regarding their efficacy. Recent studies aiming to demonstrate the efficacy of wearing an AFO with respect to walking ability are not always conclusive. In this paper it is argued that two levels of evidence, related to the ICF levels, should be recognized. Activity-level evidence expresses the gain in walking ability for the patient, while mechanical evidence expresses the correct functioning of the AFO. When used in combination to evaluate the efficacy of orthotic treatment, a joint improvement at both levels reinforces the treatment algorithm that is used. Conversely, conflicting outcomes will challenge current treatment algorithms and the supposed working mechanism of the AFO. A treatment algorithm must use relevant information as input, derived from measurements with high precision. Its result will be a specific AFO that matches the patient's needs, specified by the mechanical characterization of the AFO-footwear combination. It is concluded that research on the efficacy of AFOs should use parameters from both levels of evidence to prove the efficacy of a treatment algorithm, i.e., how to prescribe a well-matched AFO.

