Structural displacement and strain monitoring based on the edge detection operator

2016 ◽  
Vol 20 (2) ◽  
pp. 191-201 ◽  
Author(s):  
Wei Lu ◽  
Yan Cui ◽  
Jun Teng

To reduce the instrumentation cost of sensor-based strain and displacement monitoring, and to address the challenges of sensor installation in structural health monitoring, it is necessary to develop a machine vision-based monitoring method. For such a method, the most important step is accurate extraction of image features. In this article, an edge detection operator based on multi-scale structuring elements and a compound mathematical morphological operator is proposed to improve image feature extraction. The proposed method not only achieves improved filtering and anti-noise performance but also detects edges more accurately. Furthermore, the required image features (the vertices of a square calibration board and the centroid of a circular target) can be accurately extracted from the detected edge information. For validation, monitoring tests for structural local mean strain and in-plane displacement were designed accordingly. Analysis of the error between the measured and calculated values of structural strain and displacement verifies the feasibility and effectiveness of the proposed edge detection operator.
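The core idea, a morphological gradient averaged over structuring elements of several sizes, can be sketched as follows. This is a generic illustration of multi-scale morphological edge detection, not the authors' exact compound operator; the scales and the toy image are assumptions:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def multiscale_morph_edges(img, scales=(3, 5, 7)):
    """Average the morphological gradient (dilation - erosion) over
    several square structuring-element sizes; larger elements suppress
    noise while smaller ones keep the edge well localized."""
    img = img.astype(float)
    acc = np.zeros_like(img)
    for s in scales:
        acc += grey_dilation(img, size=(s, s)) - grey_erosion(img, size=(s, s))
    return acc / len(scales)

# Toy image: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = multiscale_morph_edges(img)
```

The response is high on the square's boundary and exactly zero in flat regions, which is the filtering/edge trade-off the abstract describes.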

Author(s):  
W. Krakow ◽  
D. A. Smith

The successful determination of the atomic structure of [110] tilt boundaries in Au stems from an investigation of microscope performance at intermediate accelerating voltages (200 and 400 kV), as well as a detailed understanding of how grain boundary image features depend on the variation of dynamical diffraction processes with specimen and beam orientations. This success was also facilitated by improving image quality with digital image processing techniques to the point where a structure image is obtained and each atom position is represented by a resolved image feature. Figure 1 shows an example of a low-angle (∼10°) Σ = 129/[110] tilt boundary in a ∼250 Å Au film, taken under tilted-beam bright-field imaging conditions, to illustrate the steps necessary to obtain the atomic structure configuration from the image. The original image of Fig. 1a shows the regular arrangement of strain-field images associated with the cores of ½ [10] primary dislocations, which are separated by ∼15 Å.


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework usually requires a large number of parameters, which inevitably burdens the network with a high computational cost. We propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining to solve the above problems. The proposed LFDN is designed as an encoder–decoder architecture. In the encoding stage, the image feature is reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then designed at the beginning of the decoding stage, which enables the network to continuously distill and screen valuable channel information from the feature maps. In addition, an attention-based information fusion strategy is applied between distillation modules and feature channels. By fusing different information in the proposed approach, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
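The channel screening step can be illustrated with a squeeze-and-excitation style attention gate. The paper's actual feature distillation normalization block is not specified here, so the layer sizes, pooling choice, and random weights below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel screening: pool each channel
    to a scalar, pass it through a two-layer bottleneck, and rescale the
    channels by the resulting sigmoid weights."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)          # global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)              # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gate in (0, 1)
    return feat * weights[:, None, None]

feat = rng.standard_normal((8, 16, 16))   # (channels, H, W) feature map
w1 = rng.standard_normal((2, 8))          # reduce 8 channels to 2 hidden units
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
```

Because the gate lies in (0, 1), each channel is attenuated rather than amplified, which is one simple way a network can "screen" channel information.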


2021 ◽  
Vol 193 (7) ◽  
Author(s):  
Heini Hyvärinen ◽  
Annaliina Skyttä ◽  
Susanna Jernberg ◽  
Kristian Meissner ◽  
Harri Kuosa ◽  
...  

Global deterioration of marine ecosystems, together with increasing pressure to use them, has created a demand for new, more efficient and cost-efficient monitoring tools that enable assessing changes in the status of marine ecosystems. However, demonstrating the cost-efficiency of a monitoring method is not straightforward, as there are no generally applicable guidelines. Our study provides a systematic literature mapping of methods and criteria that have been proposed or used since the year 2000 to evaluate the cost-efficiency of marine monitoring methods. We aimed to investigate these methods but discovered that examples of actual cost-efficiency assessments in the literature were rare, contradicting the prevalent use of the term “cost-efficiency.” We identified five different ways to compare the cost-efficiency of a marine monitoring method: (1) the cost–benefit ratio, (2) comparative studies based on an experiment, (3) comparative studies based on a literature review, (4) comparisons with other methods based on literature, and (5) subjective comparisons with other methods based on experience or intuition. Because of the observed high frequency of insufficient cost–benefit assessments, we strongly advise that more attention be paid to the coverage of both cost and efficiency parameters when evaluating the actual cost-efficiency of novel methods. Our results emphasize the need to improve the reliability and comparability of cost-efficiency assessments. We provide guidelines for future initiatives to develop a cost-efficiency assessment framework and suggestions for more unified cost-efficiency criteria.
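The first of the five comparison types, the cost–benefit ratio, reduces to dividing a method's cost by the monitoring benefit it delivers. The figures below are entirely hypothetical and serve only to illustrate the arithmetic:

```python
# Hypothetical annual figures for three monitoring methods (illustration only).
methods = {
    "ship-based sampling": {"annual_cost": 120_000, "samples_per_year": 300},
    "moored sensor array": {"annual_cost": 45_000, "samples_per_year": 8_760},
    "satellite product":   {"annual_cost": 20_000, "samples_per_year": 365},
}

def cost_per_sample(m):
    """Cost-benefit ratio with 'benefit' proxied by sample count."""
    return m["annual_cost"] / m["samples_per_year"]

# Rank methods from most to least cost-efficient under this single criterion.
ranked = sorted(methods, key=lambda name: cost_per_sample(methods[name]))
```

As the abstract stresses, such a single ratio only captures cost-efficiency if the benefit proxy (here, sample count) actually reflects monitoring quality.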


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 291 ◽  
Author(s):  
Hamdi Sahloul ◽  
Shouhei Shirafuji ◽  
Jun Ota

Local image features are invariant to in-plane rotations and robust to minor viewpoint changes. However, the current detectors and descriptors for local image features fail to accommodate out-of-plane rotations larger than 25°–30°. Invariance to such viewpoint changes is essential for numerous applications, including wide baseline matching, 6D pose estimation, and object reconstruction. In this study, we present a general embedding that wraps a detector/descriptor pair in order to increase viewpoint invariance by exploiting input depth maps. The proposed embedding locates smooth surfaces within the input RGB-D images and projects them into a viewpoint invariant representation, enabling the detection and description of more viewpoint invariant features. Our embedding can be utilized with different combinations of descriptor/detector pairs, according to the desired application. Using synthetic and real-world objects, we evaluated the viewpoint invariance of various detectors and descriptors, for both standalone and embedded approaches. While standalone local image features fail to accommodate average viewpoint changes beyond 33.3°, our proposed embedding boosted the viewpoint invariance to different levels, depending on the scene geometry. Objects with distinct surface discontinuities were on average invariant up to 52.8°, and the overall average for all evaluated datasets was 45.4°. Similarly, out of a total of 140 combinations involving 20 local image features and various objects with distinct surface discontinuities, only a single standalone local image feature exceeded the goal of 60° viewpoint difference in just two combinations, as compared with 19 different local image features succeeding in 73 combinations when wrapped in the proposed embedding. Furthermore, the proposed approach operates robustly in the presence of input depth noise, even that of low-cost commodity depth sensors, and well beyond.


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 828
Author(s):  
Wai Lun Lo ◽  
Henry Shu Hung Chung ◽  
Hong Fu

Estimating meteorological visibility from image characteristics is a challenging problem in meteorological parameter estimation. Meteorological visibility indicates atmospheric transparency, and this indicator is important for transport safety. This paper summarizes the outcomes of an experimental evaluation of a Particle Swarm Optimization (PSO) based transfer learning method for meteorological visibility estimation, and proposes a modified transfer learning approach that uses PSO feature selection. Image data were collected at a fixed location with a fixed viewing angle. The database images underwent a gray-averaging pre-processing step to provide information about static landmark objects for automatic extraction of effective regions from the images. Effective regions are extracted from the image database, and image features are then extracted by a neural network. A subset of image features is selected using PSO to obtain a feature vector for each effective sub-region. These feature vectors are then used to estimate the visibility of the images with multiple Support Vector Regression (SVR) models. Experimental results show that the proposed method achieves an accuracy of more than 90% for visibility estimation and is effective and robust.
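The feature-selection step can be sketched with a standard binary PSO driving a regressor on synthetic data. The paper selects neural-network features feeding SVR models; the plain least-squares fitness, the sparsity penalty, and all data below are stand-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: only features 0 and 3 actually drive the target.
X = rng.standard_normal((120, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.05 * rng.standard_normal(120)
X_tr, y_tr, X_va, y_va = X[:80], y[:80], X[80:], y[80:]

def fitness(bits):
    """Hold-out MSE of a least-squares fit on the selected features,
    plus a small penalty per selected feature."""
    mask = bits.astype(bool)
    if not mask.any():
        return np.inf
    w, *_ = np.linalg.lstsq(X_tr[:, mask], y_tr, rcond=None)
    r = X_va[:, mask] @ w - y_va
    return r @ r / len(r) + 0.01 * mask.sum()

# Binary PSO: velocities set per-bit probabilities of selecting a feature.
n_p, n_f = 12, 6
pos = (rng.random((n_p, n_f)) < 0.5).astype(float)
vel = 0.1 * rng.standard_normal((n_p, n_f))
p_best, p_val = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(30):
    g_best = p_best[p_val.argmin()]
    r1, r2 = rng.random((2, n_p, n_f))
    vel = 0.7 * vel + 1.5 * r1 * (p_best - pos) + 1.5 * r2 * (g_best - pos)
    pos = (rng.random((n_p, n_f)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    vals = np.array([fitness(p) for p in pos])
    better = vals < p_val
    p_best[better], p_val[better] = pos[better], vals[better]
best = p_best[p_val.argmin()].astype(bool)
```

With this setup the swarm should recover the two informative features, since any mask missing them incurs a large hold-out error.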


2011 ◽  
Vol 2011 ◽  
pp. 1-14 ◽  
Author(s):  
Jinjun Li ◽  
Hong Zhao ◽  
Chengying Shi ◽  
Xiang Zhou

A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which extends the analytic signal to gray-level images using the Dirac operator and the Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal extends the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images demonstrate the performance of the proposed approach.
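The gray monogenic signal can be computed with a frequency-domain Riesz transform. The sketch below recovers local amplitude, orientation, and instantaneous phase for a toy sinusoid; it illustrates the standard construction, not the paper's full multiscale color framework:

```python
import numpy as np

def monogenic(img):
    """Gray-level monogenic signal via the frequency-domain Riesz
    transform: returns local amplitude, orientation, and phase."""
    h, w = img.shape
    u = np.fft.fftfreq(w)[None, :]
    v = np.fft.fftfreq(h)[:, None]
    q = np.sqrt(u**2 + v**2)
    q[0, 0] = 1.0                       # avoid division by zero at DC
    f = img - img.mean()                # remove DC before the transform
    F = np.fft.fft2(f)
    r1 = np.real(np.fft.ifft2(-1j * u / q * F))    # Riesz x-component
    r2 = np.real(np.fft.ifft2(-1j * v / q * F))    # Riesz y-component
    amplitude = np.sqrt(f**2 + r1**2 + r2**2)
    orientation = np.arctan2(r2, r1)               # local orientation
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), f)  # instantaneous phase
    return amplitude, orientation, phase

# Horizontal unit-amplitude sinusoid: amplitude should be 1 everywhere.
x = np.arange(64)
img = np.cos(2 * np.pi * 4 * x / 64)[None, :].repeat(64, axis=0)
amp, ori, ph = monogenic(img)
```

For this 1D-structured input the Riesz pair reduces to the Hilbert transform, so the recovered local amplitude is constant, matching the analytic-signal intuition the abstract invokes.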


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Yang Zhang ◽  
Chaoyue Chen ◽  
Zerong Tian ◽  
Yangfan Cheng ◽  
Jianguo Xu

Objectives. To differentiate pituitary adenoma from Rathke cleft cyst in magnetic resonance (MR) scans by combining MR image features with texture features. Methods. A total of 133 patients were included in this study, 83 with pituitary adenoma and 50 with Rathke cleft cyst. Qualitative MR image features and quantitative texture features were evaluated using chi-square tests or the Mann–Whitney U test. Binary logistic regression analysis was conducted to investigate their ability as independent predictors. ROC analysis was subsequently conducted on the independent predictors to assess their practical value in discrimination and to investigate the association between the two types of features. Results. Signal intensity on the contrast-enhanced image was found to be the only significantly different MR image feature between the two lesions. Two texture features from the contrast-enhanced images (Histo-Skewness and GLCM-Correlation) were found to be independent predictors in discrimination, with AUC values of 0.80 and 0.75, respectively. In addition, these two texture features were suggested to be associated with signal intensity on the contrast-enhanced image. Conclusion. Signal intensity on the contrast-enhanced image was the most significant MR image feature for differentiating pituitary adenoma from Rathke cleft cyst, and texture features also showed promising and practical discriminating ability. Moreover, the two types of features could be coordinated with each other.


Panorama development is the process of integrating multiple images of the same scene to obtain a single larger, high-resolution image. Combining overlapping images in this way is useful in medical imaging, satellite data, computer vision, and automatic target recognition in military applications. The objective of this paper is to develop a high-resolution, high-quality panorama with high accuracy and minimal computation time. First, different image feature detectors (SIFT, SURF, and ORB) were compared to measure the rate of correctly detected key points and the processing time. Then, several common image blending (fusion) techniques were tested to improve the quality of the mosaicing process. The experimental results show that the ORB feature detection and description algorithm is the most accurate and fastest, giving the highest performance, and that pyramid blending gives the best stitching quality. Finally, the panorama is built by combining the ORB binary descriptor for finding image features with the pyramid blending method.
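ORB detection needs an image library such as OpenCV, but the pyramid blending step can be sketched directly. The variant below blends Laplacian bands without downsampling, which keeps the code short; the band count and smoothing width are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pyramid_blend(a, b, mask, levels=4, sigma=2.0):
    """Burt-Adelson style blend: combine the Laplacian bands of two
    images under a progressively smoothed mask, then add the residual."""
    ga, gb, gm = a.astype(float), b.astype(float), mask.astype(float)
    out = np.zeros_like(ga)
    for _ in range(levels):
        na, nb = gaussian_filter(ga, sigma), gaussian_filter(gb, sigma)
        out += gm * (ga - na) + (1 - gm) * (gb - nb)   # blended band
        ga, gb, gm = na, nb, gaussian_filter(gm, sigma)
    return out + gm * ga + (1 - gm) * gb               # coarse residual

# Blend a white and a black image across a vertical seam.
a = np.ones((32, 32))
b = np.zeros((32, 32))
mask = np.zeros((32, 32))
mask[:, :16] = 1.0            # keep the left half of image a
res = pyramid_blend(a, b, mask)
```

Low-frequency content transitions gradually across the seam while high-frequency content switches sharply, which is why pyramid blending hides mosaic seams better than simple averaging.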


Author(s):  
Siyuan Lu ◽  
Di Wu ◽  
Zheng Zhang ◽  
Shui-Hua Wang

The new coronavirus COVID-19 has been spreading all over the world for the last six months, and the death toll is still rising. Accurate diagnosis of COVID-19 is an urgent task for stopping the spread of the virus. In this paper, we propose leveraging image feature fusion for the diagnosis of COVID-19 in lung window computed tomography (CT). First, ResNet-18 and ResNet-50 were selected as the backbone deep networks to generate corresponding image representations from the CT images. Second, the representative information extracted from the two networks was fused by discriminant correlation analysis to obtain refined image features. Third, three randomized neural networks (RNNs), an extreme learning machine, a Schmidt neural network, and a random vector functional-link net, were trained using the refined features, and the predictions of the three RNNs were ensembled for more robust classification performance. Experimental results based on five-fold cross-validation suggest that our method outperforms state-of-the-art algorithms in the diagnosis of COVID-19.
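Of the three randomized networks, the extreme learning machine is the simplest to sketch: a fixed random hidden layer with a closed-form least-squares readout. The toy two-class data below stands in for the fused CT features, and averaging three ELMs mimics the ensembling step; the hidden size and seed are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def train_elm(X, y, hidden, rng):
    """Extreme learning machine: random tanh hidden layer, readout
    solved in closed form by least squares (no backpropagation)."""
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def predict_elm(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem standing in for the fused CT features.
X = rng.standard_normal((200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

# Ensemble three ELMs by averaging their scores.
models = [train_elm(X_tr, y_tr, 40, rng) for _ in range(3)]
scores = np.mean([predict_elm(X_te, m) for m in models], axis=0)
acc = ((scores > 0.5) == (y_te > 0.5)).mean()
```

Averaging the scores of independently randomized readouts reduces the variance introduced by each network's random hidden layer, which is the motivation the abstract gives for ensembling.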


2021 ◽  
Vol 32 (4) ◽  
pp. 1-13
Author(s):  
Xia Feng ◽  
Zhiyi Hu ◽  
Caihua Liu ◽  
W. H. Ip ◽  
Huiying Chen

In recent years, deep learning has achieved remarkable results in the text-image retrieval task. However, when only global image features are considered, vital local information is ignored, and the text cannot be matched well. Considering that object-level image features can help match text and image, this article proposes a text-image retrieval method that fuses salient image feature representations. Fusing salient features at the object level can improve the understanding of image semantics and thus improve text-image retrieval performance. The experimental results show that the proposed method is comparable to the latest methods, and the recall rate of some retrieval results is better than that of current work.
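The fusion idea can be sketched geometrically: concatenate a global descriptor with a pooled object-level descriptor and compare in the joint space. Real systems learn these embeddings jointly; here the random vectors, the max pooling, and the alpha weighting are assumptions, and the query is constructed to share the matching image's features:

```python
import numpy as np

rng = np.random.default_rng(4)

def fuse(global_feat, object_feats, alpha=0.5):
    """Concatenate a global image descriptor with a max-pooled
    salient-object descriptor, weighting the two parts by alpha,
    then L2-normalize for cosine comparison."""
    pooled = object_feats.max(axis=0)          # pool object-level features
    v = np.concatenate([alpha * global_feat, (1 - alpha) * pooled])
    return v / np.linalg.norm(v)

g = rng.standard_normal(8)                     # global descriptor
objs = rng.standard_normal((3, 8))             # salient-object descriptors
query = fuse(g, objs)                          # query sharing the image's features
img_match = fuse(g, objs)
img_other = fuse(rng.standard_normal(8), rng.standard_normal((3, 8)))
sim_match = float(query @ img_match)           # cosine similarity (unit vectors)
sim_other = float(query @ img_other)
```

The fused space ranks the image that shares both global and object-level content above an unrelated one, which is the retrieval behavior the method targets.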

