natural image
Recently Published Documents

TOTAL DOCUMENTS: 701 (FIVE YEARS: 198)
H-INDEX: 45 (FIVE YEARS: 6)

Author(s):  
Paula R. Villamayor ◽  
Julián Gullón ◽  
Uxía Yáñez ◽  
María Sánchez ◽  
Pablo Sánchez-Quinteiro ◽  
...  

Biostimulation is an animal management practice that helps improve reproductive parameters by modulating animal sensory systems. Chemical signals, mostly known as pheromones, have great potential in this regard. This study was conducted to determine the influence of short-term exposure of female rabbits to different, mainly pheromone-mediated, conditions on the reproductive parameters of inseminated does. Groups of 60 females each were exposed to 1) female urine, 2) male urine, 3) seminal plasma, or 4) female-female interaction just before artificial insemination, and compared to isolated female controls (females kept separated). The following reproductive parameters were analyzed for each group: receptivity (vulvar color), fertility (calving rate), prolificacy, and the number of kits born alive and dead per litter. Our results showed that the biostimulation methods employed in this experiment did not significantly improve any of the analyzed parameters. However, exposure of does to urine, especially male urine, slightly increased fertility compared with the other experimental conditions. Female-female interaction before artificial insemination, a common practice on rabbit farms, had no effect, suggesting it could be discontinued to avoid unnecessary animal handling and time cost. On the other hand, fertility was lower for animals with a pale vulvar color, whereas no differences were noticed among the other three colors that measure receptivity (pink, red, purple), suggesting these three colors could be grouped together. Additionally, equine chorionic gonadotropin injection could be replaced with various biostimulation methods, thereby reducing or replacing current hormonal treatments and contributing to animal welfare and to a natural image of animal production.


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 380
Author(s):  
Ha-Yeong Yoon ◽  
Jung-Hwa Kim ◽  
Jin-Woo Jeong

The demand for wheelchairs has increased recently as the populations of elderly people and patients with disorders grow. However, society still pays little attention to infrastructure that can threaten wheelchair users, such as sidewalks with cracks or potholes. Although various approaches have been proposed to recognize such hazards, they mainly depend on RGB images or IMU sensors, which are sensitive to outdoor conditions such as low illumination, bad weather, and unavoidable vibrations, resulting in unsatisfactory and unstable performance. In this paper, we introduce a novel system based on various convolutional neural networks (CNNs) to automatically classify the condition of sidewalks using images captured in the depth and infrared modalities. Moreover, we compare the performance of training CNNs from scratch with a transfer learning approach in which weights learned from the natural image domain (e.g., ImageNet) are fine-tuned to the depth and infrared image domains. In particular, we propose applying a ResNet-152 model pretrained with self-supervised learning during transfer learning to leverage better image representations. Performance evaluation on the classification of sidewalk condition was conducted with 100% and 10% of the training data. The experimental results validate the effectiveness and feasibility of the proposed approach and suggest future research directions.
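The core idea of the transfer learning approach described above — reusing features learned on natural images and fitting only the task-specific classifier to the new modality — can be sketched in plain numpy. This is a minimal illustration (a softmax head trained on frozen backbone features), not the authors' actual pipeline, which fine-tunes full CNNs such as ResNet-152:

```python
import numpy as np

def finetune_head(features, labels, n_classes, lr=0.1, epochs=200, seed=0):
    """Train a softmax classification head on frozen backbone features.

    In transfer learning, the convolutional backbone (e.g. an
    ImageNet-pretrained ResNet) is kept fixed and only this final
    layer is fitted to the new domain (here: depth/infrared sidewalk
    images). `features` is an (N, D) array of backbone activations.
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    W = rng.normal(0, 0.01, (d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                   # softmax cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(features, W, b):
    """Predict class indices for an (N, D) batch of features."""
    return (features @ W + b).argmax(axis=1)
```

In the paper's full setting, the backbone weights would also be updated at a small learning rate rather than frozen; the head-only variant is the cheapest point on that spectrum.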


2021 ◽  
Vol 14 (1) ◽  
pp. 102
Author(s):  
Xin Li ◽  
Tao Li ◽  
Ziqi Chen ◽  
Kaiwen Zhang ◽  
Runliang Xia

Semantic segmentation is a fundamental task in interpreting remote sensing imagery (RSI) for various downstream applications. Due to high intra-class variance and inter-class similarity, inflexibly transferring networks designed for natural images to RSI is inadvisable. To enhance the distinguishability of learnt representations, attention modules have been developed and applied to RSI, yielding satisfactory improvements. However, these designs capture contextual information by treating all pixels equally, regardless of whether they lie near edges. As a result, blurry boundaries are generated, raising high uncertainty in classifying the many adjacent pixels. We therefore propose an edge distribution attention (EDA) module to highlight the edge distributions of learnt feature maps in a self-attentive fashion. In this module, we first formulate and model column-wise and row-wise edge attention maps based on covariance matrix analysis. Furthermore, a hybrid attention module (HAM) that emphasizes both edge distributions and position-wise dependencies is devised by combining EDA with a non-local block. On this basis, a conceptually end-to-end neural network, termed EDENet, integrates HAM hierarchically to strengthen multi-level representations in detail. EDENet implicitly learns representative and discriminative features, providing reasonable cues for dense prediction. Experimental results on the ISPRS Vaihingen, Potsdam, and DeepGlobe datasets show its efficacy and superiority over state-of-the-art methods in overall accuracy (OA) and mean intersection over union (mIoU). In addition, an ablation study further validates the effect of EDA.
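The general flavour of row-wise edge attention can be illustrated with a toy numpy sketch: estimate per-row edge energy from finite differences and reweight rows accordingly. Note that this is a loose illustration under my own simplifying assumptions; the paper's actual EDA is built on covariance matrix analysis of the feature maps, which is not reproduced here:

```python
import numpy as np

def row_edge_attention(fmap):
    """Loose sketch of row-wise edge attention (not the paper's exact EDA).

    For a feature map of shape (C, H, W), estimate per-row edge energy
    from vertical finite differences, turn it into softmax attention
    weights over rows, and residually rescale the rows so that
    boundary regions are emphasized. Returns (reweighted_map, attn).
    """
    c, h, w = fmap.shape
    # vertical gradient: difference between adjacent rows, padded back to H rows
    grad = np.abs(np.diff(fmap, axis=1))              # (C, H-1, W)
    grad = np.concatenate([grad, grad[:, -1:, :]], axis=1)
    row_energy = grad.mean(axis=(0, 2))               # (H,) mean edge strength per row
    attn = np.exp(row_energy - row_energy.max())
    attn /= attn.sum()                                # softmax over rows
    out = fmap * (1.0 + attn[None, :, None] * h)      # residual-style reweighting
    return out, attn
```

A column-wise counterpart would apply the same construction along the width axis; in the paper both directions feed into HAM together with a non-local block.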


2021 ◽  
Vol 15 ◽  
Author(s):  
Zarina Rakhimberdina ◽  
Quentin Jodelet ◽  
Xin Liu ◽  
Tsuyoshi Murata

With the advent of brain imaging techniques and machine learning tools, much effort has been devoted to building computational models to capture the encoding of visual information in the human brain. One of the most challenging brain decoding tasks is the accurate reconstruction of the perceived natural images from brain activities measured by functional magnetic resonance imaging (fMRI). In this work, we survey the most recent deep learning methods for natural image reconstruction from fMRI. We examine these methods in terms of architectural design, benchmark datasets, and evaluation metrics and present a fair performance evaluation across standardized evaluation metrics. Finally, we discuss the strengths and limitations of existing studies and present potential future directions.
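One evaluation metric that is common in this reconstruction literature (exact protocols vary by study, so treat this as an illustrative assumption rather than the survey's standardized metric) is n-way identification accuracy, shown here in its two-way form using Pearson correlation as the similarity measure:

```python
import numpy as np

def pairwise_identification(recons, targets):
    """Two-way identification accuracy for image reconstructions.

    Each reconstruction is compared, via Pearson correlation, against
    its true target and against every other target; a comparison
    counts as correct when the true target correlates higher than the
    distractor. Returns the fraction of correct comparisons.
    """
    n = len(recons)
    flat_r = [r.ravel() for r in recons]
    flat_t = [t.ravel() for t in targets]
    correct, total = 0, 0
    for i in range(n):
        r_true = np.corrcoef(flat_r[i], flat_t[i])[0, 1]
        for j in range(n):
            if j == i:
                continue
            r_dist = np.corrcoef(flat_r[i], flat_t[j])[0, 1]
            correct += r_true > r_dist
            total += 1
    return correct / total
```

Chance level for this metric is 0.5, which makes results comparable across studies with different image resolutions.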


2021 ◽  
Vol 13 (24) ◽  
pp. 5111
Author(s):  
Zhen Shu ◽  
Xiangyun Hu ◽  
Hengming Dai

Accurate building extraction from remotely sensed images is essential for topographic mapping, cadastral surveying, and many other applications. Fully automatic segmentation remains a great challenge due to poor generalization ability and inaccurate segmentation results. In this work, we focus on robust click-based interactive building extraction in remote sensing imagery. We argue that stability is vital to an interactive segmentation system, and we observe that the distance of a newly added click to the boundary of the previous segmentation mask carries information about the progress of the interactive segmentation process. To promote the robustness of interactive segmentation, we combine this information with the previous segmentation mask and the positive and negative clicks to form a progress guidance map, and feed it, together with the original RGB image, to a convolutional neural network (CNN) that we name PGR-Net. In addition, an adaptive zoom-in strategy and an iterative training scheme are proposed to further improve the stability of PGR-Net. Compared with the latest methods FCA and f-BRS, the proposed PGR-Net generally requires one to two fewer clicks to achieve the same segmentation results. Comprehensive experiments demonstrate that PGR-Net outperforms related state-of-the-art methods on five natural image datasets and three building datasets of remote sensing images.
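The progress cue described above — the distance from a new click to the boundary of the previous mask — can be computed directly; a small distance suggests boundary refinement, a large one a grossly wrong region. This is a minimal numpy sketch of that single quantity; how PGR-Net encodes it into the full guidance map (with the mask and positive/negative clicks) is not reproduced here:

```python
import numpy as np

def click_boundary_distance(prev_mask, click):
    """Distance from a click (row, col) to the boundary of a binary mask.

    Boundary pixels are mask pixels with at least one background
    4-neighbour. Returns infinity when the mask is empty (no previous
    segmentation to refine).
    """
    padded = np.pad(prev_mask, 1, constant_values=0)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:]).astype(bool)
    boundary = prev_mask.astype(bool) & ~interior
    ys, xs = np.nonzero(boundary)
    if len(ys) == 0:
        return float("inf")
    cy, cx = click
    return float(np.min(np.hypot(ys - cy, xs - cx)))
```

In practice a distance transform over the whole image (rather than per click) would give the same information for every pixel at once.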


Iproceedings ◽  
10.2196/35437 ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. e35437
Author(s):  
Raluca Jalaboi ◽  
Mauricio Orbes Arteaga ◽  
Dan Richter Jørgensen ◽  
Ionela Manole ◽  
Oana Ionescu Bozdog ◽  
...  

Background Convolutional neural networks (CNNs) are regarded as state-of-the-art artificial intelligence (AI) tools for dermatological diagnosis, and they have been shown to achieve expert-level performance when trained on a representative dataset. CNN explainability is a key factor in adopting such techniques in practice and can be achieved using attention maps of the network. However, evaluation of CNN explainability has been limited to visual assessment and remains qualitative, subjective, and time-consuming. Objective This study aimed to provide a framework for an objective quantitative assessment of the explainability of CNNs for dermatological diagnosis benchmarks. Methods We sourced 566 images available under the Creative Commons license from two public datasets—DermNet NZ and SD-260, with reference diagnoses of acne, actinic keratosis, psoriasis, seborrheic dermatitis, viral warts, and vitiligo. Eight dermatologists with teledermatology expertise annotated each clinical image with a diagnosis, as well as diagnosis-supporting characteristics and their localization. A total of 16 supporting visual characteristics were selected, including basic terms such as macule, nodule, papule, patch, plaque, pustule, and scale, and additional terms such as closed comedo, cyst, dermatoglyphic disruption, leukotrichia, open comedo, scar, sun damage, telangiectasia, and thrombosed capillary. The resulting dataset consisted of 525 images with three rater annotations for each. Explainability of two fine-tuned CNN models, ResNet-50 and EfficientNet-B4, was analyzed with respect to the reference explanations provided by the dermatologists. Both models were pretrained on the ImageNet natural image recognition dataset and fine-tuned using 3214 images of the six target skin conditions obtained from an internal clinical dataset. CNN explanations were obtained as activation maps of the models through gradient-weighted class-activation maps. 
We computed the fuzzy sensitivity and specificity of each characteristic attention map with regard to both the fuzzy gold standard characteristic attention fusion masks and the fuzzy union of all characteristics. Results On average, explainability of EfficientNet-B4 was higher than that of ResNet-50 in terms of sensitivity for 13 of 16 supporting characteristics, with mean values of 0.24 (SD 0.07) and 0.16 (SD 0.05), respectively. However, explainability was lower in terms of specificity, with mean values of 0.82 (SD 0.03) and 0.90 (SD 0.00) for EfficientNet-B4 and ResNet-50, respectively. All measures were within the range of corresponding interrater metrics. Conclusions We objectively benchmarked the explainability power of dermatological diagnosis models through the use of expert-defined supporting characteristics for diagnosis. Acknowledgments This work was supported in part by the Danish Innovation Fund under Grant 0153-00154A. Conflict of Interest None declared.
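The fuzzy sensitivity and specificity used above can be sketched with element-wise fuzzy-set operations. This is a minimal numpy illustration assuming the standard min operator for fuzzy intersection; the authors' exact fusion of the three rater masks into the gold-standard map is not reproduced:

```python
import numpy as np

def fuzzy_sensitivity_specificity(attn, gt):
    """Fuzzy sensitivity/specificity between an attention map and a
    fuzzy ground-truth mask, both with values in [0, 1].

    With element-wise minimum as fuzzy intersection:
      sensitivity = |min(attn, gt)| / |gt|
      specificity = |min(1 - attn, 1 - gt)| / |1 - gt|
    """
    attn = np.clip(attn, 0.0, 1.0)
    gt = np.clip(gt, 0.0, 1.0)
    tp = np.minimum(attn, gt).sum()          # fuzzy true positives
    tn = np.minimum(1 - attn, 1 - gt).sum()  # fuzzy true negatives
    sens = tp / gt.sum()
    spec = tn / (1 - gt).sum()
    return sens, spec
```

With crisp 0/1 masks these formulas reduce to the ordinary pixel-wise sensitivity and specificity, which is what makes them a natural generalization for soft attention maps.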




2021 ◽  
Author(s):  
David St-Amand ◽  
Curtis L Baker

Neurons in the primary visual cortex (V1) receive excitation and inhibition from two different pathways processing lightness (ON) and darkness (OFF). V1 neurons overall respond more strongly to dark than to light stimuli (Yeh, Xing and Shapley, 2010; Kremkow et al., 2014), consistent with a preponderance of darker regions in natural images (Ratliff et al., 2010), as well as with human psychophysics (Buchner & Baumgartner, 2007). However, it has been unclear whether this "dark dominance" is due to more excitation from the OFF pathway (Jin et al., 2008) or more inhibition from the ON pathway (Taylor et al., 2018). To understand the mechanisms behind dark dominance, we record electrophysiological responses of individual simple-type V1 neurons to natural image stimuli and then train biologically inspired convolutional neural networks to predict the neuronal responses. Analyzing a sample of 74 neurons (in anesthetized, paralyzed cats) has revealed their responses to be more driven by dark than by light stimuli, consistent with previous investigations (Yeh et al., 2010; Kremkow et al., 2013). We show this asymmetry to be predominantly due to slower inhibition to dark stimuli rather than to stronger excitation from the thalamocortical OFF pathway. Consistent with dark-dominant neurons having faster responses than light-dominant neurons (Komban et al., 2014), we find that dark dominance occurs solely in the early latencies of neuronal responses. Neurons that are strongly dark-dominated also tend to be less orientation-selective. This novel approach gives us new insight into the dark-dominance phenomenon and provides an avenue to address new questions about excitatory and inhibitory integration in cortical neurons.
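A common way to summarize the ON/OFF asymmetry described above is a normalized dominance index. This is an illustrative assumption (the paper's actual analysis fits CNN response models rather than computing such an index): the signed contrast between a neuron's mean response to dark and to light stimuli, positive for dark-dominant neurons.

```python
import numpy as np

def dark_dominance_index(resp_dark, resp_light):
    """Normalized dark-vs-light response contrast for one neuron.

    Returns (d - l) / (d + l) where d and l are the mean firing
    responses to dark and light stimuli; +1 means purely
    dark-driven, -1 purely light-driven, 0 balanced.
    """
    d = float(np.mean(resp_dark))
    l = float(np.mean(resp_light))
    return (d - l) / (d + l) if (d + l) > 0 else 0.0
```

Computing this index in separate response-latency windows would expose the early-latency restriction of dark dominance that the study reports.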


Author(s):  
Ting-Yu Lin ◽  
Jen-Shiun Chiang ◽  
Cheng-En Wei ◽  
Yu-Shian Lin
