focal area
Recently Published Documents


TOTAL DOCUMENTS: 122 (FIVE YEARS: 30)

H-INDEX: 17 (FIVE YEARS: 3)

Author(s): P.-G. Bleotu, J. Wheeler, D. Papadopoulos, M. Chabanis, J. Prudent, ...

Sensors, 2021, Vol 21(21), pp. 7371
Author(s): Jiyoung Lee, Seunghyun Jang, Jungbin Lee, Taehan Kim, Seonghan Kim, ...

The non-invasive examination of conjunctival goblet cells using a microscope is a novel procedure for the diagnosis of ocular surface diseases. However, it is difficult to obtain an all-in-focus image because of the curvature of the eye and the limited focal depth of the microscope. The microscope therefore acquires multiple images while translating the focus axially, and the resulting image stack must be processed. We propose a multi-focus image fusion method to generate an all-in-focus image from such a stack of microscopic images. First, a bandpass filter is applied to the source images, and the focus areas are extracted using a Laplacian transformation and thresholding with a morphological operation. Next, a self-adjusting guided filter is applied to connect the local focus regions naturally. A window-size-updating method is adopted in the guided filter to reduce the number of parameters. The resulting algorithm can operate on a large number of images (10 or more) and obtain an all-in-focus image. To evaluate the proposed method quantitatively, two types of evaluation metrics are used: "full-reference" and "no-reference". The experimental results demonstrate that the algorithm is robust to noise and preserves local focus information through focal-area extraction. Additionally, the proposed method outperforms state-of-the-art approaches in terms of both visual effects and image quality assessments.
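As a rough illustration of the pipeline described above (bandpass filtering, Laplacian focus measure, morphological cleanup, soft blending of the per-slice focus maps), here is a minimal sketch in Python/NumPy. The Gaussian smoothing of the focus masks stands in for the paper's self-adjusting guided filter, and the function name and parameter values (`band`, `thresh`, `guide_sigma`) are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of Laplacian-based multi-focus fusion for an aligned,
# grayscale focus stack `stack` of shape (N, H, W). Illustrative only.
import numpy as np
from scipy import ndimage

def fuse_stack(stack, band=(1.0, 8.0), thresh=0.02, guide_sigma=5.0):
    """Fuse an axial focus stack into a single all-in-focus image."""
    stack = stack.astype(np.float32)
    # 1. Band-pass filter each slice (difference of Gaussians).
    bp = np.stack([ndimage.gaussian_filter(s, band[0]) -
                   ndimage.gaussian_filter(s, band[1]) for s in stack])
    # 2. Focus measure: absolute Laplacian response, thresholded and cleaned
    #    with a morphological opening to get a binary focus mask per slice.
    lap = np.stack([np.abs(ndimage.laplace(b)) for b in bp])
    masks = np.stack([ndimage.binary_opening(l > thresh * l.max(),
                                             iterations=2) for l in lap])
    # 3. Soft weights: smooth each mask (a stand-in for the guided filter)
    #    so neighbouring focus regions blend naturally at their borders.
    weights = np.stack([ndimage.gaussian_filter(m.astype(np.float32),
                                                guide_sigma) for m in masks])
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    # 4. Weighted sum over the stack gives the all-in-focus image.
    return (weights * stack).sum(axis=0)
```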


2021, Vol 6(6), pp. 064402
Author(s): K. Burdonov, A. Fazzini, V. Lelasseux, J. Albrecht, P. Antici, ...

2021
Author(s): Yoshihiro Nakashima, Shun Hongo, Kaori Mizuno, Gota Yajima, Zeun's C.B. Dzefck

Camera traps are a powerful research tool with a wide range of applications in animal ecology, conservation, and management. However, camera traps may not always detect animals passing in front of them, and the probability of successfully detecting animals (i.e. camera sensitivity) may vary spatially and temporally. This constraint may create substantial bias when estimating critical parameters, such as the density of unmarked populations or animal activity levels.

We applied the "double-observer approach" to estimate detection probability and correct for potentially imperfect detection. Two camera traps were set up at each camera station to monitor the same focal area. The detection probability and the number of animal passes were estimated concurrently with a hierarchical capture-recapture model for stratified populations in a Bayesian framework. Monte Carlo simulations were performed to test the reliability of the approach. We then estimated the detection probabilities of a camera model (Browning Strike Force Pro) within an equilateral-triangle focal area (1.56 m²) for 12 ground-dwelling mammals in Japan and Cameroon. We also evaluated the possible difference in detection probability between daytime and nighttime by incorporating it as a covariate.

We found that the double-observer approach reliably quantifies camera sensitivity and provides unbiased estimates of the number of animal passes, even when the detection probability varies among animal passes or camera stations. Camera sensitivity did not differ between daytime and nighttime in either Japan or Cameroon, providing the first evidence that the number of animal passes per unit time may be a viable index of animal activity levels. Nonetheless, the camera traps missed 4%–36% of animal passes within the focal area. Current density estimation models that rely on perfect detection may therefore underestimate animal density by the same order of magnitude.

Our results show that the double-observer approach can be effective in correcting for imperfect camera sensitivity. The hierarchical capture-recapture model used here estimates the distribution of detection probability and the number of animal passes concurrently, and can therefore be incorporated easily into current density estimation models. We believe this approach could make a wide range of camera-trapping studies more accurate.
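To make the double-observer idea concrete, the sketch below applies the simplest two-observer (Lincoln-Petersen-type) estimator to a pair of camera traps watching the same focal area. It illustrates the principle only; the study itself fits a hierarchical capture-recapture model in a Bayesian framework, and the function name and counts in the example are invented for illustration.

```python
# Two cameras at one station: estimate each camera's sensitivity and the
# true number of animal passes from overlapping detections (illustrative).
def double_observer_estimate(n1, n2, n_both):
    """n1, n2  -- passes detected by camera 1 / camera 2
    n_both  -- passes detected by both cameras"""
    p1 = n_both / n2            # camera 1 sensitivity, judged by camera 2's detections
    p2 = n_both / n1            # camera 2 sensitivity, judged by camera 1's detections
    n_total = n1 * n2 / n_both  # Lincoln-Petersen estimate of total passes
    return p1, p2, n_total

# Example: camera 1 records 80 passes, camera 2 records 70, 60 are seen by both.
p1, p2, total = double_observer_estimate(80, 70, 60)
print(f"p1≈{p1:.2f}, p2≈{p2:.2f}, estimated passes≈{total:.0f}")
```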


2021, Vol 333, pp. 02011
Author(s): Tatyana Rubleva, Konstantin Simonov, Valentin Kashkin, Anna Malkanova, Roman Odintsov

The aim of this work is to study gravitational anomalies that arise in the source regions of strong underwater earthquakes with magnitude Mw > 8. For this purpose, data obtained by the GRACE space system were used. Variations of the EWH (equivalent water height) parameter with a 30-day period were investigated over the focal area of the 2011 Japanese earthquake for the period 2010-2012. It was found that during the preparation of the earthquake the EWH values increase significantly in this area over three months, while during aftershock activity the EWH values decrease within a month. Maps of the variations of the EWH parameter were constructed for both disturbed and background seismic conditions of the geomedium. Anomaly indices δEWH were calculated, which made it possible to analyze the local gravitational field of the investigated focal zone in more detail.
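The abstract does not spell out how the δEWH anomaly index is defined, so the sketch below shows one plausible form: a z-score of area-averaged monthly EWH values against a seismically quiet background period. The function name, inputs, and the z-score formulation are assumptions for illustration, not the authors' definition.

```python
# One possible anomaly index for monthly EWH over a focal area (illustrative).
import numpy as np

def ewh_anomaly_index(ewh_monthly, background_months):
    """ewh_monthly       -- 1-D array of area-averaged EWH, one value per month
    background_months -- boolean mask marking seismically quiet months"""
    bg = ewh_monthly[background_months]
    # Anomaly of each month relative to the background mean, in background
    # standard deviations (a simple z-score form of a delta-EWH index).
    return (ewh_monthly - bg.mean()) / bg.std(ddof=1)
```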


Author(s): Dorottya Osváth

This paper relates to research on language use on the Internet and to gender linguistics. It briefly describes an online attitude survey conducted with a questionnaire in November 2020, which examined whether, in the opinion of the informants who filled it in, women and men communicate differently in the discourse type called chat. The overall research addressed this main question in several ways; one focal area was the use of emoticons. In this study I present the results of one task from the questionnaire, which asked informants to classify twelve emoticons, shown in a picture attached to the task, as feminine, masculine, or neutral without any context. Classification therefore had to be performed solely on the basis of how the emoticons were represented. According to the results, some tendencies can be identified in the visual appearance of emoticons that can imply a feminine, masculine, or neutral qualification even without context. These are, however, only general statements whose validity in context is shaped by certain factors: for instance, the nature of the relationship between the two communicating parties can affect which emoticons are used, regardless of the parties' gender.


2021, Vol 34(0), pp. 1-14
Author(s): Yunpeng Zhang, Weitao Wang, Wei Yang, Min Liu, ...
