image context
Recently Published Documents


Total documents: 72 (last five years: 27)
H-index: 12 (last five years: 3)

Author(s): Mohammed Shuaibu Badeggi, Habsah Muda

Building a university's image depends on student trust, student satisfaction, and student loyalty. However, few studies examine the university image context and its role in student loyalty. In recent years, university image has come to be regarded as a vital component of success in the education sector. With the growing number of public and private universities across Malaysia, the academic industry has become increasingly competitive. This situation compels universities to focus on creating a positive corporate image by strengthening student loyalty. Trust and satisfaction play an essential role in achieving and maintaining student loyalty. The research also reviewed the links between student trust and student satisfaction to establish their relationship with student loyalty.


2021, Vol 12 (1)
Author(s): Ali Riza Durmaz, Martin Müller, Bo Lei, Akhil Thomas, Dominik Britz, ...

Abstract: Automated, reliable, and objective microstructure inference from micrographs is essential for a comprehensive understanding of process-microstructure-property relations and tailored materials development. However, as microstructures become more complex, such inference requires advanced segmentation methodologies. While deep learning offers new opportunities, an intuition about the required data quality and quantity and a methodological guideline for microstructure quantification are still missing. This, along with deep learning's seemingly opaque decision-making process, hampers its breakthrough in this field. We apply a multidisciplinary deep learning approach, devoting equal attention to specimen preparation and imaging, and train distinct U-Net architectures with 30–50 micrographs of different imaging modalities and electron backscatter diffraction-informed annotations. On the challenging task of lath-bainite segmentation in complex-phase steel, we achieve accuracies of 90%, rivaling expert segmentations. Further, we discuss the impact of image context, pre-training with domain-extrinsic data, and data augmentation. Network visualization techniques demonstrate plausible model decisions based on grain boundary morphology.
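The abstract does not spell out the training configuration; as a rough illustration of how a small set of annotated micrographs could be used to fit a segmentation network, a minimal PyTorch sketch might look as follows. The dataset layout, the hyperparameters, and the U-Net implementation (any standard one can be plugged in as `model`) are assumptions, not details taken from the paper.

```python
# Minimal sketch: binary lath-bainite segmentation training loop in PyTorch.
# All names and hyperparameters are illustrative, not the authors' configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset


class MicrographDataset(Dataset):
    """Pairs of grayscale micrographs and EBSD-informed binary masks (assumed layout)."""

    def __init__(self, images, masks):
        self.images = images  # list of HxW float32 numpy arrays, scaled to [0, 1]
        self.masks = masks    # list of HxW {0, 1} numpy arrays

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img = torch.from_numpy(self.images[idx]).unsqueeze(0)          # 1xHxW
        mask = torch.from_numpy(self.masks[idx]).unsqueeze(0).float()  # 1xHxW
        return img, mask


def train(model, dataset, epochs=100, lr=1e-4, batch_size=4, device="cuda"):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # per-pixel binary cross-entropy on logits
    model.to(device).train()
    for _ in range(epochs):
        for img, mask in loader:
            img, mask = img.to(device), mask.to(device)
            opt.zero_grad()
            loss = loss_fn(model(img), mask)
            loss.backward()
            opt.step()
    return model
```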


2021, Vol 2021, pp. 1-11
Author(s): Yuqin Li, Ke Zhang, Weili Shi, Yu Miao, Zhengang Jiang

Medical image quality strongly affects clinical diagnosis and treatment, making medical image denoising a popular research topic. Image denoising based on deep learning has attracted considerable attention owing to its excellent capacity for automatic feature extraction. Most existing methods for medical image denoising are adapted to specific noise types and have difficulty handling spatially varying noise; in addition, detail loss and structural changes occur in the denoised images. Considering image context perception and structure preservation, this paper introduces a medical image denoising method based on a conditional generative adversarial network (CGAN) for various unknown noise types. In the proposed architecture, the noisy image is merged with its corresponding gradient image as the network's conditional information, which enhances the contrast between the original signal and the noise according to structural specificity. A novel generator with residual dense blocks makes full use of the relationships among convolutional layers to explore image context. Furthermore, the reconstruction loss and the WGAN loss are combined as the objective function to ensure consistency between the denoised image and the real image. A series of medical image denoising experiments yields PSNR = 33.2642 and SSIM = 0.9206 on the JSRT dataset and PSNR = 35.1086 and SSIM = 0.9328 on the LIDC dataset. Compared with state-of-the-art methods, the proposed method achieves superior performance.
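As an illustration of the conditioning and objective described above, the following sketch concatenates the noisy image with its gradient image as the generator input and combines an L1 reconstruction term with a WGAN term. The Sobel-based gradient and the weighting factor lambda_rec are assumptions made for illustration; the paper's exact formulation may differ.

```python
# Sketch of a gradient-conditioned generator objective: conditional input is the
# noisy image concatenated with its gradient image; the loss mixes reconstruction
# and WGAN (critic) terms. `generator` and `critic` are assumed PyTorch modules.
import torch
import torch.nn.functional as F


def gradient_image(x):
    """Approximate gradient magnitude with Sobel filters (x: Bx1xHxW)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(x, kx.to(x.device), padding=1)
    gy = F.conv2d(x, ky.to(x.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


def generator_loss(generator, critic, noisy, clean, lambda_rec=100.0):
    cond = torch.cat([noisy, gradient_image(noisy)], dim=1)  # image + gradient as condition
    denoised = generator(cond)
    rec_loss = F.l1_loss(denoised, clean)   # reconstruction term
    adv_loss = -critic(denoised).mean()     # WGAN generator term
    return lambda_rec * rec_loss + adv_loss
```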


2021, pp. 11-30
Author(s): Matthew Haysom

2021, Vol 1, pp. 273
Author(s): Ilana Torres, Kathryn Slusarczyk, Malihe Alikhani, Matthew Stone

In image-text presentations from online discourse, pronouns can refer to entities depicted in images, even if these entities are not otherwise referred to in a text caption. While visual salience may be enough to allow a writer to use a pronoun to refer to a prominent entity in the image, coherence theory suggests that pronoun use is more restricted. Specifically, language users may need an appropriate coherence relation between text and imagery to license and resolve pronouns. To explore this hypothesis and better understand the relationship between image context and text interpretation, we annotated an image-text data set with coherence relations and pronoun information. We find that pronoun use reflects a complex interaction between the content of the pronoun, the grammar of the text, and the relation of text and image.
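A simple way to picture the annotation scheme is as one record per image-text pair carrying a coherence label and per-pronoun information; the field names and relation labels below are illustrative only and are not the authors' actual schema.

```python
# Hypothetical representation of one annotated image-text instance.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PronounAnnotation:
    pronoun: str            # e.g. "she"
    token_index: int        # position of the pronoun in the caption
    refers_to_image: bool   # True if the antecedent is only depicted, not mentioned


@dataclass
class ImageTextInstance:
    image_id: str
    caption: str
    coherence_relation: str                          # illustrative label set
    pronouns: List[PronounAnnotation] = field(default_factory=list)


example = ImageTextInstance(
    image_id="post_00142",
    caption="She finally finished the mural after three weekends.",
    coherence_relation="Elaboration",
    pronouns=[PronounAnnotation("She", 0, refers_to_image=True)],
)
```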


2021, Vol 10 (8), pp. 1635
Author(s): Joachim Krois, Lisa Schneider, Falk Schwendicke

Objectives: We aimed to assess the impact of image context information on the accuracy of deep learning models for tooth classification on panoramic dental radiographs. Methods: Our dataset contained 5008 panoramic radiographs with a mean of 25.2 teeth per image. Teeth were segmented bounding-box-wise and classified by one expert; this was validated by another expert. Tooth segments were cropped with different amounts of context; the baseline size was 100% of each box and was scaled up to capture 150%, 200%, 250%, and 300% to increase context. On each of the five generated datasets, ResNet-34 classification models were trained using the Adam optimizer with a learning rate of 0.001 over 25 epochs with a batch size of 16. A total of 20% of the data was used for testing; in subgroup analyses, models were tested only on specific tooth types. Feature visualization using gradient-weighted class activation mapping (Grad-CAM) was employed to visualize salient areas. Results: F1-scores increased monotonically from 0.77 in the base case (100%) to 0.93 on the largest segments (300%; p = 0.0083; Mann–Kendall test). Gains in accuracy were limited between 200% and 300%. This behavior was found for all tooth types except canines, where accuracy was much higher even for smaller segments and increasing context yielded only minimal gains. With increasing context, salient areas were more widely distributed over each segment; at maximum segment size, the models assessed at least 3–4 teeth as well as the interdental or inter-arch space to arrive at a classification. Conclusions: Context matters; classification accuracy increased significantly with increasing context.
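The context-scaling step can be pictured as enlarging each tooth bounding box around its centre before cropping; the sketch below shows one plausible way to do this, with border clamping and integer rounding as assumptions about how out-of-range boxes would be handled.

```python
# Sketch: crop a tooth bounding box from a panoramic radiograph with extra context.
from PIL import Image


def crop_with_context(image: Image.Image, box, scale: float) -> Image.Image:
    """box = (x_min, y_min, x_max, y_max); scale = 1.0 keeps the original box."""
    x_min, y_min, x_max, y_max = box
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    w, h = (x_max - x_min) * scale, (y_max - y_min) * scale
    left = max(0, cx - w / 2)
    top = max(0, cy - h / 2)
    right = min(image.width, cx + w / 2)
    bottom = min(image.height, cy + h / 2)
    return image.crop((int(left), int(top), int(right), int(bottom)))


# e.g. generate the five context variants used in the study:
# crops = {s: crop_with_context(pano, tooth_box, s) for s in (1.0, 1.5, 2.0, 2.5, 3.0)}
```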


2021
Author(s): Ali Durmaz, Martin Müller, Bo Lei, Akhil Thomas, Dominik Britz, ...

Abstract: Automated, reliable, and objective microstructure inference from micrographs is an essential milestone towards a comprehensive understanding of process-microstructure-property relations and tailored materials development. However, as microstructures become more complex, such inference requires advanced segmentation methodologies. While deep learning (DL), in principle, offers new opportunities for this task, an intuition about the required data quality and quantity and an extensive methodological DL guideline for microstructure quantification and classification are still missing. This, along with a lack of open-access datasets and the seemingly opaque decision-making process of DL models, hampers its breakthrough in this field. We address all of these obstacles with a multidisciplinary DL approach, devoting equal attention to specimen preparation, contrasting, and imaging. To this end, we train distinct U-Net architectures with 30–50 micrographs of different imaging modalities and corresponding EBSD-informed annotations. On the challenging task of lath-bainite segmentation in complex-phase steel, we achieve accuracies of 90%, rivaling expert segmentations. Further, we discuss the impact of image context, pre-training with domain-extrinsic data, and data augmentation. Network visualization techniques demonstrate plausible model decisions based on grain boundary morphology and triple points. As a result, we resolve preconceptions about required data amounts and interpretability to pave the way for DL's day-to-day application to microstructure quantification.
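One way to realize the data augmentation mentioned above for image-mask pairs is to apply identical geometric transforms to the micrograph and its annotation; the specific transforms and parameters below are assumptions, not the authors' recipe.

```python
# Sketch: synchronized augmentation of a micrograph and its segmentation mask
# using torchvision's functional API (works on PIL images or tensors).
import random
import torchvision.transforms.functional as TF


def augment_pair(image, mask):
    """Apply identical geometric augmentations to a micrograph and its mask."""
    if random.random() < 0.5:                    # horizontal flip
        image, mask = TF.hflip(image), TF.hflip(mask)
    if random.random() < 0.5:                    # vertical flip
        image, mask = TF.vflip(image), TF.vflip(mask)
    angle = random.choice([0, 90, 180, 270])     # right-angle rotation is label-safe
    if angle:
        image, mask = TF.rotate(image, angle), TF.rotate(mask, angle)
    # photometric jitter only on the image, never on the label mask
    image = TF.adjust_brightness(image, 1.0 + random.uniform(-0.2, 0.2))
    return image, mask
```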


2021, Vol 11 (7), pp. 3066
Author(s): Zhikang Fu, Jun Li, Guoqing Chen, Tianbao Yu, Tiansheng Deng

In the era of big data, the massive volume of harmful multimedia resources publicly available on the Internet greatly threatens children and adolescents. In particular, recognizing pornographic videos is of great importance for protecting the mental and physical health of minors. In contrast to conventional methods built only on an image classifier without considering audio cues in the video, we propose a unified deep architecture termed PornNet that integrates dual sub-networks for pornographic video recognition. More specifically, image frames and audio cues extracted from the videos are delivered to two deep networks for pattern discrimination. For discriminating pornographic frames, we propose a local-context-aware network that takes the image context into account when capturing key content, while an attention network captures temporal information for recognizing pornographic audio. The recognition scores generated by the two sub-networks are then incorporated into the unified architecture, and a pre-defined aggregation function produces the whole-video recognition result. Experiments on our newly collected large dataset demonstrate promising performance, with an accuracy of 93.4% on a dataset comprising 1,000 pornographic samples along with 1,000 normal videos and 1,000 sexy videos.
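The abstract describes a pre-defined aggregation function over the two sub-networks' scores without specifying it; a weighted average of the frame-level and audio-level scores, as sketched below, is one plausible reading, with the weight being an assumed example rather than the paper's value.

```python
# Sketch: score-level fusion of the image and audio sub-networks into a
# video-level decision. The weighting scheme is hypothetical.
from typing import Sequence


def aggregate_video_score(frame_scores: Sequence[float],
                          audio_scores: Sequence[float],
                          image_weight: float = 0.7) -> float:
    """Return a video-level pornography probability in [0, 1]."""
    image_score = sum(frame_scores) / len(frame_scores) if frame_scores else 0.0
    audio_score = sum(audio_scores) / len(audio_scores) if audio_scores else 0.0
    return image_weight * image_score + (1.0 - image_weight) * audio_score


# e.g. flag the video as pornographic if aggregate_video_score(f, a) > 0.5
```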


2021, Vol 10 (3), pp. 125
Author(s): Junqing Huang, Liguo Weng, Bingyu Chen, Min Xia

Analyzing land cover using remote sensing images has broad prospects, and precise segmentation of land cover is the key to applying this technology. Convolutional neural networks (CNNs) are now widely used in many image semantic segmentation tasks. However, existing CNN models often exhibit poor generalization and low segmentation accuracy on land cover segmentation tasks. To solve this problem, this paper proposes the Dual Function Feature Aggregation Network (DFFAN). The method combines image context information, gathers image spatial information, and extracts and fuses features. DFFAN uses a residual neural network as the backbone to obtain feature information of different dimensions from remote sensing images through multiple downsampling stages. This work designs an Affinity Matrix Module (AMM) to obtain the context of each feature map and proposes a Boundary Feature Fusion Module (BFF) to fuse the context information and spatial information of an image in order to determine the location distribution of each image category. Compared with existing methods, the proposed method significantly improves accuracy; its mean intersection over union (MIoU) on the LandCover dataset reaches 84.81%.
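The abstract does not detail the Affinity Matrix Module; a generic affinity-based context operation, in which pairwise similarities between spatial positions re-weight the feature map, conveys the underlying idea and is sketched below purely as an illustration, not as the paper's exact AMM.

```python
# Sketch: affinity-based context aggregation over a feature map (generic
# non-local formulation, offered only to illustrate the idea).
import torch
import torch.nn.functional as F


def affinity_context(feat: torch.Tensor) -> torch.Tensor:
    """feat: BxCxHxW -> context-enriched features of the same shape."""
    b, c, h, w = feat.shape
    x = feat.view(b, c, h * w)                         # B x C x N, with N = H*W
    affinity = torch.bmm(x.transpose(1, 2), x)         # B x N x N pairwise similarities
    affinity = F.softmax(affinity, dim=-1)             # normalize over positions
    context = torch.bmm(x, affinity.transpose(1, 2))   # aggregate features by affinity
    return context.view(b, c, h, w) + feat             # residual connection
```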

