image set
Recently Published Documents

TOTAL DOCUMENTS: 456 (five years: 175)
H-INDEX: 28 (five years: 7)
Micromachines ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 79
Author(s):  
Greta Ionela Barbulescu ◽  
Taddeus Paul Buica ◽  
Iacob Daniel Goje ◽  
Florina Maria Bojin ◽  
Valentin Laurentiu Ordodi ◽  
...  

Whole-organ decellularization techniques have facilitated the fabrication of extracellular matrices (ECMs) for engineering new organs. Unfortunately, there is no objective gold-standard evaluation of the scaffold that does not involve a destructive method such as histological analysis or quantification of DNA removal from the dry tissue. Our proposal is a software application that uses deep convolutional neural networks (DCNN) to distinguish between different stages of decellularization and determine the exact moment of completion. Hearts from male Sprague Dawley rats (n = 10) were decellularized with 1% sodium dodecyl sulfate (SDS) in a modified Langendorff device in the presence of an alternating rectangular electric field. Spectrophotometric measurements of deoxyribonucleic acid (DNA) and total protein concentrations in the decellularization solution were taken every 30 min. A monitoring system supervised the sessions, collecting a large number of photos saved in corresponding folders. This system aimed to establish a strong correlation between the data gathered by spectrophotometry and the state of the heart as visualized with an OpenCV-based spectrometer. A decellularization completion metric was built using a DCNN-based classifier trained on an image set comprising thousands of photos. Optimizing the decellularization process with a machine learning approach opens the way to rapid progress in tissue bioengineering research.
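The abstract does not publish the completion metric itself; a minimal sketch, assuming the DCNN classifier outputs softmax probabilities over ordered decellularization stages (the stage ordering and function name are placeholders), could map those probabilities to a single completion score:

```python
import numpy as np

def completion_metric(stage_probs):
    """Map per-stage softmax probabilities to a completion score in [0, 1].

    stage_probs: shape (n_stages,), probabilities for ordered
    decellularization stages (0 = native tissue, last = fully decellularized).
    The score is the expected stage index, normalized to [0, 1].
    """
    stages = np.arange(len(stage_probs))
    expected_stage = float(np.dot(stages, stage_probs))
    return expected_stage / (len(stage_probs) - 1)

# A classifier confident the heart is near the final stage:
print(completion_metric(np.array([0.0, 0.05, 0.15, 0.8])))  # ≈ 0.917
```

A monitoring loop would then declare decellularization complete once this score crosses a chosen threshold.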


2022 ◽  
Vol 9 (1) ◽  
Author(s):  
Risako Shirai ◽  
Katsumi Watanabe

Scientists conducting affective research often use visual emotional images to examine the mechanisms of defensive responses to threatening and dangerous events and objects. Many studies use the rich emotional images from the International Affective Picture System (IAPS) to facilitate affective research. While IAPS images can be classified into emotional categories such as fear or disgust, the number of images per discrete emotional category is limited. We developed the Open Biological Negative Image Set (OBNIS), consisting of 200 colour and greyscale creature images categorized as disgusting, fearful or neither. Participants in Experiment 1 (N = 210) evaluated the images' valence and arousal and classified them as 'disgusting', 'fearful' or 'neither'. In Experiment 2, other participants (N = 423) rated the disgust and fear levels of the images. As a result, the OBNIS provides valence, arousal, disgust and fear ratings and 'disgusting', 'fearful' and 'neither' emotional categories for each image. These images are available for download on the Internet (https://osf.io/pfrx4/?view_only=911b1be722074ad4aab87791cb8a72f5).


Author(s):  
Anchal Kumawat ◽  
Sucheta Panda

Often in practice, during image acquisition, the acquired image is degraded by factors such as noise, motion blur, camera mis-focus and atmospheric turbulence, rendering it unsuitable for further analysis or processing. To improve the quality of such degraded images, a double hybrid restoration filter is proposed: two identical sets of input images are processed in parallel and the output images are fused, combining the restoration filters with the concept of image fusion into a unified filter. The first image set is processed by applying deconvolution using the Wiener Filter (DWF) twice and decomposing the output image using the Discrete Wavelet Transform (DWT). The second image set is processed simultaneously by applying deconvolution using the Lucy-Richardson Filter (DLR) twice, followed by the same procedure. The proposed filter outperforms the DWF and DLR filters for both blurred and noisy images. It is compared with some standard deconvolution algorithms and some state-of-the-art restoration filters using seven image quality assessment parameters. Simulation results demonstrate the success of the proposed algorithm, and the visual and quantitative results are very impressive.
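The Wiener branch of such a pipeline can be sketched in the frequency domain. This is a generic numpy illustration, not the authors' implementation; the regularization constant k and all names are placeholders:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution: F_hat = H* G / (|H|^2 + k).

    k plays the role of a noise-to-signal power ratio and regularizes
    frequencies where the blur transfer function H is close to zero.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the blur kernel
    G = np.fft.fft2(blurred)                # spectrum of the degraded image
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```

Applying the filter twice, as the abstract describes, is simply a second call on the first pass's output; the DWT decomposition and fusion steps would then combine the Wiener and Lucy-Richardson branches.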


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Kolten Kersey ◽  
Andrew Gonzalez

Background and Objective: As technology is integrated further into medicine, more specialties are discovering new uses for it in their clinical practice. However, the tasks we want technology to complete are often removed from developers' intended tasks. A growing field of research integrates medicine with current AI technology to bridge this gap and apply existing technology to medical uses. We aim to use an active learning pipeline (a form of machine learning) to automate the labeling of blood vessels on angiograms and potentially develop the ability to detect occlusions. Machine learning essentially allows the machine to teach itself with human guidance.
Methods: A machine learning pipeline is in development to automate the process. To create a baseline from which the machine can start learning, the first set of angiograms is being labeled by hand using the program 3D Slicer. In the first pass, we quickly label the blood vessels by adjusting the color sensitivity threshold to highlight the darker blood vessels against the lighter surrounding tissue. In the second pass, we erase any erroneous highlighting picked up in the first pass, such as tools, tissue, contrast outside the injection site, and sutures. In the third pass, we label and segment the arteries into specific vessels such as the femoral, common iliac, and internal iliac. The labeled images will then be fed to the machine for automated learning.
Results: We are in the process of labeling the initial image set.
Potential Impact: A pipeline for angiogram automation will allow physicians to efficiently search images for specific arteries and save valuable time usually spent searching images. It would also allow automated labeling of occlusions that a physician could then verify.
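The first-pass thresholding step described above can be sketched as a generic intensity-percentile threshold (this is an illustration, not the 3D Slicer tool itself; the percentile value is an assumption):

```python
import numpy as np

def threshold_vessels(angiogram, percentile=20):
    """First-pass vessel mask: keep the darkest pixels, where contrast-filled
    vessels appear against the lighter surrounding tissue."""
    cutoff = np.percentile(angiogram, percentile)
    return angiogram <= cutoff
```

The second and third passes (erasing erroneous regions, then splitting the mask into named arteries) would refine this coarse mask by hand before it seeds the active learning loop.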


Minerals ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 1354
Author(s):  
Liqin Jia ◽  
Mei Yang ◽  
Fang Meng ◽  
Mingyue He ◽  
Hongmin Liu

Mineral recognition is important in geological research. Traditional mineral recognition methods require professional knowledge or special equipment, are susceptible to human experience, and are inconvenient in some conditions, such as fieldwork. The development of computer vision offers the possibility of convenient, fast, and intelligent mineral recognition. Recently, several image-based mineral recognition methods using neural networks have been proposed for this purpose. However, these methods do not sufficiently exploit the features extracted from the backbone network or the available information about the samples in the mineral dataset, resulting in low recognition accuracy. In this paper, a method based on feature fusion and online hard sample mining is proposed to improve recognition accuracy using only mineral photographs. The method first fuses multi-resolution features extracted from ResNet-50 to obtain comprehensive information about the mineral photos, and then introduces a weighted top-k loss to emphasize the learning of hard samples. On a dataset consisting of 14,986 images of 22 common minerals, the proposed method with 10-fold cross-validation achieves a Top-1 accuracy of 88.01% on the validation image set, surpassing Inception-v3 and EfficientNet-B0 by margins of 1.88% and 1.29%, respectively, which demonstrates the good prospects of the proposed method for convenient and reliable mineral recognition from mineral photographs alone.
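The exact form of the weighted top-k loss is defined in the paper; as a rough sketch of the underlying idea (up-weighting the k hardest samples in a batch; the weight value and function name here are arbitrary placeholders):

```python
import numpy as np

def weighted_topk_loss(losses, k, hard_weight=2.0):
    """Online hard-sample emphasis: up-weight the k largest per-sample losses
    in a batch, then return the weighted mean loss."""
    weights = np.ones_like(losses)
    hard_idx = np.argsort(losses)[-k:]   # indices of the k hardest samples
    weights[hard_idx] = hard_weight
    return float(np.sum(weights * losses) / np.sum(weights))
```

Compared with a plain mean, this pushes the optimizer to spend more of each update on the samples the network currently gets most wrong.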


2021 ◽  
Vol 11 (23) ◽  
pp. 11229
Author(s):  
Sung-Sik Park ◽  
Van-Than Tran ◽  
Dong-Eun Lee

Pothole repair is one of the paramount tasks in road maintenance. Effective road surface monitoring is an ongoing challenge for management agencies. Current pothole detection, which relies on manually operated image processing, is labour-intensive and time-consuming. Computer vision offers a means to automate the visual inspection process using digital imaging, identifying potholes from a series of images. The goal of this study is to apply different YOLO models to pothole detection. Three state-of-the-art object detection frameworks (i.e., YOLOv4, YOLOv4-tiny, and YOLOv5s) were evaluated for real-time responsiveness and detection accuracy on the image set. The image set was assembled by running deep convolutional neural network (CNN) based pothole detectors. After collecting a set of 665 images at 720 × 720 pixel resolution, capturing various types of potholes on different road surface conditions, the set was divided into training, testing, and validation subsets. Mean average precision at a 50% Intersection-over-Union threshold (mAP_0.5) was used to measure model performance. The results show that the mAP_0.5 of YOLOv4, YOLOv4-tiny, and YOLOv5s is 77.7%, 78.7%, and 74.8%, respectively, confirming that YOLOv4-tiny is the best-fit model for pothole detection.
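The mAP_0.5 criterion counts a detection as a true positive when its box overlaps a ground-truth box with IoU of at least 0.5. The IoU itself is standard and can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half in x: IoU = 50 / 150
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333...
```

Average precision is then the area under the precision-recall curve built from these matches, averaged over classes to give mAP.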


2021 ◽  
pp. 102-106
Author(s):  
Claudia Menzel ◽  
Gyula Kovács ◽  
Gregor U. Hayn-Leichsenring ◽  
Christoph Redies

Most artists who create abstract paintings place the pictorial elements not at random but arrange them intentionally in a specific artistic composition. This arrangement results in a pattern of image properties that differs from image versions in which the same pictorial elements are randomly shuffled. In the article under discussion, the original abstract paintings of the authors' image set were rated as more ordered and harmonious, but less interesting, than their shuffled counterparts. The authors tested whether the human brain distinguishes between these original and shuffled images by recording electrical brain activity in a paradigm that evokes a so-called visual mismatch negativity. The results revealed that the brain detects the differences between the two types of images quickly and automatically. These findings are in line with models that postulate a significant role for early (low-level) perceptual processing of formal image properties in aesthetic evaluations.


Author(s):  
Ying-Hui Wang ◽  
Ruo-Han Ma ◽  
Jia-Jun Li ◽  
Chuang-Chuang Mu ◽  
Yan-Ping Zhao ◽  
...  

Objectives: To evaluate the diagnostic efficacy of CBCT-MRI fused images for anterior disc displacement and bone changes of the temporomandibular joint (TMJ), which are the main imaging manifestations of temporomandibular disorders (TMD). Methods: Two hundred and thirty-one TMJs from 120 patients diagnosed with TMD were selected for the study. Anterior disc displacement, bone defect, and bone hyperplasia as evaluated by three experts were used as the reference standard. Three residents individually evaluated all three sets of images (CBCT images, MRI images, and CBCT-MRI images fused from the individual CBCT and MRI images) in random order for the above three imaging manifestations on a five-point scale. Each set of images was observed at least 1 week apart, and a second evaluation was performed 4 weeks later. Intra- and interobserver agreement was assessed using the intraclass correlation coefficient (ICC). The areas under the ROC curves (AUCs) of the three image sets were compared with a Z test, and p < 0.05 was considered statistically significant. Results: One hundred and forty-five cases were determined to show anterior disc displacement, 84 bone defect, and 40 bone hyperplasia. Intra- and interobserver agreement for the CBCT-MRI fused image set (0.76-0.91) was good to excellent, and the diagnostic accuracy for bone changes was significantly higher than that of the MRI image set (p < 0.05). Conclusions: CBCT-MRI fused images display the disc and surrounding bone structures simultaneously and significantly improve observer reliability and diagnostic accuracy, especially for inexperienced residents.


Symmetry ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1892
Author(s):  
Recep Eryigit ◽  
Bulent Tugrul

We report the results of an in-depth study of 15 variants of five different Convolutional Neural Network (CNN) architectures for the classification of seeds of seven different grass species that possess symmetry properties. The performance metrics of the nets are investigated in relation to the computational load and the number of parameters. The results indicate that the relation between the accuracy performance and operation count or number of parameters is linear in the same family of nets but that there is no relation between the two when comparing different CNN architectures. Using default pre-trained weights of the CNNs was found to increase the classification accuracy by ≈3% compared with training from scratch. The best performing CNN was found to be DenseNet201 with a 99.42% test accuracy for the highest resolution image set.


Author(s):  
Vincy Devi V. K ◽  
Rajesh R.

In the human body, genetic codes are stored in the genes. All of our inherited traits are associated with these genes, which are grouped into structures called chromosomes. Typically, each cell contains 23 pairs of chromosomes, with each parent contributing half. If a person has a partial or full extra copy of chromosome 21, the condition is called Down syndrome. It results in intellectual disability, reading impairment, developmental delay, and other medical abnormalities. There is no specific treatment for Down syndrome, so early detection and screening of this disability are the best approaches to its management. In this work, Down syndrome is recognized from a set of facial images. A solid geometric descriptor is employed to extract the facial features from the image set. An AdaBoost method is used to assemble the required datasets and to perform the categorization. The extracted information is then used to train a neural network with the backpropagation algorithm. The presented model meets the requirement with 98.67% accuracy.

