Co-emergence of multi-scale cortical activities of irregular firing, oscillations and avalanches achieves cost-efficient information capacity

2017 ◽  
Vol 13 (2) ◽  
pp. e1005384 ◽  
Author(s):  
Dong-Ping Yang ◽  
Hai-Jun Zhou ◽  
Changsong Zhou
Author(s):  
K Ramakrishna Kini ◽  
Muddu Madakyaru

Abstract The task of fault detection is crucial in modern chemical industries for improved product quality and process safety. In this regard, data-driven fault detection (FD) strategies based on independent component analysis (ICA) have gained attention, since ICA improves monitoring by capturing non-Gaussian features in the process data. However, the presence of measurement noise in the process data degrades the performance of the FD strategy, since the noise masks important information. To enhance monitoring in noisy environments, wavelet-based multi-scale filtering is integrated with the ICA model to yield a novel multi-scale independent component analysis (MSICA) FD strategy. One of the challenges in multi-scale ICA modeling is choosing the optimum decomposition depth; a novel scheme based on ICA model parameter estimation at each depth is proposed in this paper to achieve this. The effectiveness of the proposed MSICA-based FD strategy is illustrated through three case studies: a dynamic multivariate process, a quadruple-tank process and a distillation column process. In each case study, the performance of the MSICA FD strategy is assessed at different noise levels by comparing it with conventional FD strategies. The results indicate that the proposed MSICA FD strategy enhances performance at higher noise levels, since multi-scale wavelet-based filtering is able to de-noise the data and capture the salient information in noisy process data.
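The multi-scale filtering stage described above can be sketched as follows. This is a minimal illustration only, assuming a Haar wavelet and hard thresholding of detail coefficients; the paper's actual wavelet choice, threshold rule and depth-selection scheme are not given in this excerpt, and the function name is illustrative. The denoised data would then feed a standard ICA estimator (e.g. FastICA) whose monitoring statistics flag faults.

```python
import numpy as np

def haar_denoise(x, depth=3, k=3.0):
    """Multi-scale Haar wavelet filtering: decompose to `depth` levels,
    hard-threshold the detail coefficients, then reconstruct."""
    n = len(x)
    assert n % (2 ** depth) == 0, "length must be divisible by 2**depth"
    approx, details = x.astype(float), []
    for _ in range(depth):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # detail
        details.append(d)
        approx = a
    # threshold details against a robust noise estimate (MAD of finest level)
    sigma = np.median(np.abs(details[0])) / 0.6745
    details = [np.where(np.abs(d) > k * sigma, d, 0.0) for d in details]
    # inverse transform, level by level
    for d in reversed(details):
        up = np.empty(2 * len(approx))
        up[0::2] = (approx + d) / np.sqrt(2)
        up[1::2] = (approx - d) / np.sqrt(2)
        approx = up
    return approx
```

With `k=0` no coefficient is suppressed and the transform reconstructs the input exactly, which is a useful sanity check before tuning the threshold and decomposition depth on real process data.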


2021 ◽  
Vol 16 (1) ◽  
pp. 71-94
Author(s):  
Hairi Karim ◽  
Alias Abdul Rahman ◽  
Suhaibah Azri ◽  
Zurairah Halim

The CityGML model is now the norm for smart city and digital twin development, supporting better planning, management, risk-related modelling and other applications. CityGML comes with five levels of detail (LoD), mainly constructed from point cloud measurements and images acquired by several systems, resulting in models of varying accuracy and detail. The LoDs, also known as pre-defined multi-scale models, require large storage, memory and graphics resources compared to single-scale models. Furthermore, these multi-scale models contain redundant geometries and attributes, are costly in time and workload to update, and are difficult to view in a single viewer. It is essential for data owners to adopt a multi-scale spatial management solution that minimizes the drawbacks of the current implementation. Proper construction, control and management of multi-scale models are needed to encourage and expedite data sharing among data owners, agencies, stakeholders and public users for efficient information retrieval and analysis. This paper discusses the construction of the CityGML model with different LoDs using several datasets. A scale unique ID is introduced to connect all respective LoDs for cross-LoD information queries within a single viewer. The paper also highlights the benefits of intermediate outputs and the limitations of the proposed solution, as well as suggestions for future work.
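The scale-unique-ID idea can be illustrated with a small index structure: every representation of a city object, at whatever LoD, is registered under one shared ID, so a cross-LoD query returns all of them at once. This is a sketch under assumed names (`MultiScaleIndex`, `BuildingRecord`, the `BLDG-0042` ID), not the paper's data model or CityGML's encoding.

```python
from dataclasses import dataclass, field

@dataclass
class BuildingRecord:
    """One city object, keyed by a scale-unique ID shared across LoDs."""
    scale_uid: str                              # e.g. "BLDG-0042"
    lods: dict = field(default_factory=dict)    # LoD level -> geometry/attributes

class MultiScaleIndex:
    def __init__(self):
        self._by_uid = {}

    def register(self, scale_uid, lod, payload):
        """Attach one LoD representation to the object's shared record."""
        rec = self._by_uid.setdefault(scale_uid, BuildingRecord(scale_uid))
        rec.lods[lod] = payload

    def query(self, scale_uid):
        """Cross-LoD query: all representations of one object."""
        return self._by_uid[scale_uid].lods

idx = MultiScaleIndex()
idx.register("BLDG-0042", 1, {"footprint": "...", "height": 12.0})
idx.register("BLDG-0042", 2, {"roof": "gabled"})
print(sorted(idx.query("BLDG-0042")))   # -> [1, 2]
```

Because updates touch a single record, this layout also limits the redundancy and update cost the abstract describes for disconnected per-LoD datasets.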


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Zhi Geng ◽  
Yanfei Wang

Abstract Geoscientists mainly identify subsurface geologic features using exploration-derived seismic data. Classification or segmentation of 2D/3D seismic images commonly relies on conventional deep learning methods for image recognition. However, complex reflections of seismic waves tend to form high-dimensional and multi-scale signals, making traditional convolutional neural networks (CNNs) computationally costly. Here we propose a highly efficient and resource-saving CNN architecture (SeismicPatchNet) with topological modules and multi-scale-feature fusion units for classifying seismic data, which was discovered by an automated data-driven search strategy. The storage volume of the architecture parameters (0.73 M) is only ~2.7 MB, ~0.5% of the well-known VGG-16 architecture. SeismicPatchNet predicts nearly 18 times faster than ResNet-50 and shows an overwhelming advantage in identifying Bottom Simulating Reflection (BSR), an indicator of marine gas-hydrate resources. Saliency mapping demonstrated that our architecture captured key features well. These results suggest the prospect of end-to-end interpretation of multiple seismic datasets at extremely low computational cost.
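The multi-scale-feature fusion idea mentioned above can be sketched framework-free: view the same patch through several pooling scales, bring each view back to full resolution, and stack them as channels. This is a toy illustration of the concept only, not the published SeismicPatchNet architecture; all function names are ours.

```python
import numpy as np

def avg_pool(x, k):
    """Average-pool a 2-D array by factor k (k must divide both dims)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample(x, k):
    """Nearest-neighbour upsample by factor k."""
    return np.repeat(np.repeat(x, k, axis=0), k, axis=1)

def multiscale_fusion(x, scales=(1, 2, 4)):
    """Fuse views of the patch at several receptive-field sizes:
    pool at each scale, upsample back, stack along a channel axis."""
    maps = [upsample(avg_pool(x, s), s) if s > 1 else x for s in scales]
    return np.stack(maps, axis=0)        # shape (len(scales), H, W)

patch = np.random.rand(8, 8)             # stand-in for a seismic patch
fused = multiscale_fusion(patch)
```

In a CNN the pooled branches would pass through learned convolutions before fusion; the point here is only that one tensor ends up carrying coarse and fine views of the same signal.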


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Zhi Geng ◽  
Yanfei Wang

An amendment to this paper has been published and can be accessed via a link at the top of the paper.


2021 ◽  
Vol 6 (1) ◽  
pp. 1-5
Author(s):  
Yuhao Chen ◽  
Alexander Wong ◽  
Yuan Fang ◽  
Yifan Wu ◽  
Linlin Xu

Multi-scale image decomposition (MID) is a fundamental task in computer vision and image processing that involves the transformation of an image into a hierarchical representation comprising different levels of visual granularity, from coarse structures to fine details. A well-engineered MID disentangles the image signal into meaningful components which can be used in a variety of applications such as image denoising, image compression, and object classification. Traditional MID approaches such as wavelet transforms tackle the problem through carefully designed basis functions under rigid decomposition structure assumptions. However, as the information distribution varies from one type of image content to another, rigid decomposition assumptions lead to inefficient representations, i.e., some scales can contain little to no information. To address this issue, we present the Deep Residual Transform (DRT), a data-driven MID strategy in which the input signal is transformed into a hierarchy of non-linear representations at different scales, with each representation independently learned as the representational residual of previous scales at a user-controlled detail level. As such, the proposed DRT progressively disentangles scale information from the original signal by sequentially learning residual representations. The decomposition flexibility of this approach allows for highly tailored representations catering to specific types of image content, and results in greater representational efficiency and compactness. In this study, we realize the proposed transform by leveraging a hierarchy of sequentially trained autoencoders. To explore the efficacy of the proposed DRT, we leverage two datasets comprising very different types of image content: 1) CelebFaces and 2) Cityscapes. Experimental results show that the proposed DRT achieves highly efficient information decomposition on both datasets despite their very different visual granularity characteristics.
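The residual idea at the heart of DRT can be shown with a fixed smoother standing in for each learned autoencoder: every level models only what the coarser levels missed, and the levels sum back to the input. This is a Laplacian-pyramid-style sketch of the residual mechanism, not the trained DRT; in DRT each level's model is an autoencoder fitted to the previous residual.

```python
import numpy as np

def smooth(x, k=5):
    """Box-filter smoothing as a stand-in for one learned 'coarse' model."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def residual_decompose(x, levels=3):
    """Residual multi-scale decomposition: each level captures the
    residual left unexplained by the coarser levels before it."""
    parts, residual = [], x.astype(float)
    for _ in range(levels):
        coarse = smooth(residual)
        parts.append(coarse)           # this level's representation
        residual = residual - coarse   # what remains for finer levels
    parts.append(residual)             # final unexplained detail
    return parts

signal = np.sin(np.linspace(0, 6 * np.pi, 128)) + 0.05 * np.random.randn(128)
parts = residual_decompose(signal)
reconstruction = np.sum(parts, axis=0)  # the parts sum exactly back to the input
```

Because each level sees only the previous residual, a level facing content with little remaining structure simply learns a near-empty representation, which is exactly the flexibility the abstract claims over rigid wavelet structures.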


Author(s):  
D. C. Joy ◽  
R. D. Bunn

The information available from an SEM image is limited both by the inherent signal-to-noise ratio that characterizes the image and by the transformations it may undergo as it is passed through the amplifying circuits of the instrument. In applications such as Critical Dimension Metrology it is necessary to quantify these limitations in order to assess the likely precision of any measurement made with the microscope.

The information capacity of an SEM signal, defined as the minimum number of bits needed to encode the output signal, depends on the signal-to-noise ratio of the image, which in turn depends on the probe size, source brightness and acquisition time per pixel, and on the efficiency of the specimen in producing the signal being observed. A detailed analysis of the secondary electron case shows that the information capacity C (bits/pixel) of the SEM signal channel can be written as:
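The expression itself is cut off in this excerpt, so the authors' derived form cannot be reproduced here. For context only, the classical Shannon result that such analyses build on gives the capacity of a noisy channel per sample as:

```latex
% Context only: Shannon capacity of a noisy channel per sample,
% NOT the authors' secondary-electron expression (truncated in this excerpt).
C = \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits/pixel}
```

Here S/N is the signal-to-noise ratio of the channel; the paper's expression specializes this to the secondary electron signal in terms of probe current, dwell time and specimen yield.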

