Topological surface mapping with computer vision to measure cutaneous tissue deformation from digital images

2021 ◽  
Vol 141 (5) ◽  
pp. S75
Author(s):  
E.L. Larson ◽  
D.P. DeMeo ◽  
C. Shi ◽  
J.M. Galeotti ◽  
B.T. Carroll
Water ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1825
Author(s):  
Nur Muhadi ◽  
Ahmad Abdullah ◽  
Siti Bejo ◽  
Muhammad Mahadi ◽  
Ana Mijic

Flood disasters are considered annual disasters in Malaysia due to their consistent occurrence. They are among the most dangerous disasters in the country. Lack of data during flood events is the main constraint to improving flood monitoring systems. With the rapid development of information technology, flood monitoring systems using a computer vision approach have gained attention over the last decade. Computer vision requires an image segmentation technique to understand the content of the image and to facilitate analysis. Various segmentation algorithms have been developed to improve results. This paper presents a comparative study of image segmentation techniques used in extracting water information from digital images. The segmentation methods were evaluated visually and statistically. To evaluate the segmentation methods statistically, the dice similarity coefficient and the Jaccard index were calculated to measure the similarity between the segmentation results and the ground truth images. Based on the experimental results, the hybrid technique obtained the highest values among the three methods, yielding an average of 97.70% for the dice score and 95.51% for the Jaccard index. Therefore, we concluded that the hybrid technique is a promising segmentation method compared to the others in extracting water features from digital images.
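The two overlap metrics used in the evaluation can be computed directly from binary segmentation masks; a minimal NumPy sketch (array contents are illustrative):

```python
import numpy as np

def dice_and_jaccard(pred, truth):
    """Dice similarity coefficient and Jaccard index for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * intersection / (pred.sum() + truth.sum())
    jaccard = intersection / union
    return dice, jaccard

# Toy example: segmentation result vs. ground truth
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 1]])
dice, jacc = dice_and_jaccard(pred, truth)
# intersection = 2, union = 4, |pred| = |truth| = 3
# dice = 4/6 ≈ 0.667, jaccard = 2/4 = 0.5
```

Note that the two scores are monotonically related (Jaccard = Dice / (2 − Dice)), which is why the hybrid method ranks first on both.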


This paper presents an approach to transforming digital images into cartoon-like renderings. The method differs from those used previously. The paper focuses on the various techniques involved in the process which, when applied layer by layer, yield an appropriately balanced output. We explore different functions that can be combined in a particular pattern to produce a filtered, composed output. The mathematical basis of each function is explicated and its operation explained in detail. The system aims to use the filters' full functionality, which helps in both application and research; it can serve as a framework for computer vision-related systems and can be improved and embedded within other systems, working either as an independent module or as an integrated component.
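The layer-by-layer idea can be sketched with two basic layers, color quantization plus an edge mask; this NumPy toy is only an illustration of the layered composition, not the paper's actual filter chain:

```python
import numpy as np

def cartoonize(img, levels=4):
    """Naive cartoon effect: quantize colors, then draw edges in black.

    img: uint8 array of shape (H, W, 3). An illustrative stand-in for a
    layered filter pipeline; the threshold and level count are arbitrary.
    """
    # Layer 1: color quantization collapses smooth gradients into flat regions
    step = 256 // levels
    quant = (img // step) * step + step // 2

    # Layer 2: simple gradient-based edge mask on the mean channel
    gray = img.mean(axis=2)
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1]))
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    edges = (gx + gy) > 30

    # Composite the layers: flat colors with black edge strokes on top
    out = quant.copy()
    out[edges] = 0
    return out.astype(np.uint8)

demo = np.zeros((8, 8, 3), dtype=np.uint8)
demo[:, 4:] = 200                    # two flat regions, one sharp boundary
toon = cartoonize(demo)
```

In a real pipeline the quantization layer would typically be a bilateral or median smoothing step, but the composition pattern is the same.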


2020 ◽  
pp. 147592172091722
Author(s):  
Hyunjin Bae ◽  
Keunyoung Jang ◽  
Yun-Kyu An

This article proposes a new end-to-end deep super-resolution crack network (SrcNet) for improving computer vision–based automated crack detectability. The digital images acquired from large-scale civil infrastructures for crack detection using unmanned robots often suffer from motion blur and lack of pixel resolution, which may degrade the corresponding crack detectability. The proposed SrcNet is able to significantly enhance the crack detectability by augmenting the pixel resolution of the raw digital image through deep learning. SrcNet consists of two phases: phase I, deep learning–based super-resolution (SR) image generation, and phase II, deep learning–based automated crack detection. Once the raw digital images are obtained from a target bridge surface, phase I of SrcNet generates SR images corresponding to the raw digital images. Then, phase II automatically detects cracks from the generated SR images, making it possible to remarkably improve the crack detectability. SrcNet is experimentally validated using digital images obtained with a climbing robot and an unmanned aerial vehicle from in situ concrete bridges located in South Korea. The validation test results reveal that the proposed SrcNet shows 24% better crack detectability compared to the crack detection results using the raw digital images.
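The two-phase structure can be sketched as a pipeline; here naive nearest-neighbor upsampling and dark-pixel thresholding stand in for SrcNet's deep-learning phases (all function names and thresholds are illustrative, not the paper's networks):

```python
import numpy as np

def phase1_upscale(img, factor=2):
    """Stand-in for phase I: raise pixel resolution.

    SrcNet uses a deep SR network here; nearest-neighbor repetition is
    used purely to show the data flow between the two phases."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def phase2_detect_cracks(img, dark_thresh=60):
    """Stand-in for phase II: flag dark pixels as crack candidates."""
    return img < dark_thresh

# Toy 4x4 "bridge surface" patch with one dark crack pixel
raw = np.full((4, 4), 200, dtype=np.uint8)
raw[2, 1] = 20
sr = phase1_upscale(raw)           # 8x8 super-resolved image
mask = phase2_detect_cracks(sr)    # binary crack map on the SR image
```

The point of the pipeline is that phase II always operates on the phase-I output, so any resolution gain directly benefits detectability.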


The processing of multimedia content underpins real-world computer vision in many applications, and digital images make up a large part of multimedia data. Content-based image retrieval (CBIR) is an image-retrieval approach that uses the visual features of an image, such as color, shape and texture, to search large databases for images matching a user query. CBIR relies on extracting an image's visual features, and these features are extracted automatically, i.e., without human interaction. In this paper we provide a detailed overview of recent developments in CBIR and image representation. We review the main aspects of various models of image retrieval and image representation, from low-level feature extraction to recent semantic machine learning approaches. For feature extraction, HSV conversion, image segmentation and color histogram techniques are used; these capture the salient content of an image while minimizing complexity, cost, energy and time consumption. A machine learning model is then trained for similarity testing, and the validation and testing phases are performed accordingly, resulting in improved results compared to previously reported techniques. The precision values of the proposed technique are fairly good.
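The color-histogram feature and similarity test can be illustrated with a quantized joint histogram compared by histogram intersection; the binning and metric here are generic choices, not necessarily the paper's, and the same code applies to HSV planes after conversion:

```python
import numpy as np

def color_histogram(img, bins=4):
    """Joint 3-channel color histogram, L1-normalized to a feature vector."""
    # Quantize each channel into `bins` levels, then count joint occurrences
    q = (img.astype(int) * bins) // 256
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical normalized histograms."""
    return np.minimum(h1, h2).sum()

a = np.zeros((8, 8, 3), dtype=np.uint8)        # all-black image
b = np.full((8, 8, 3), 255, dtype=np.uint8)    # all-white image
sim_aa = histogram_intersection(color_histogram(a), color_histogram(a))
sim_ab = histogram_intersection(color_histogram(a), color_histogram(b))
```

Ranking database images by this similarity against the query's histogram is the simplest form of the retrieval step the abstract describes.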


Author(s):  
Emmanuel Udoh

Computer vision or object recognition complements human or biological vision using techniques from machine learning, statistics, scene reconstruction, indexing and event analysis. Object recognition is an active research area that implements artificial vision in software and hardware. Some application examples are autonomous robots, surveillance, indexing databases of pictures and human-computer interaction. This visual aid is beneficial to users, because humans remember information with greater accuracy when it is presented visually than when it originates in writing, speech or in kinesthetic form. Linguistic indexing adds another dimension to computer vision by automatically assigning words or textual descriptions to images. This augments content-based image retrieval (CBIR), which extracts or searches for digital images in large databases. According to Li and Wang (2003), most existing CBIR projects are general-purpose image retrieval systems that search for images visually similar to a query sketch. Current CBIR systems are incapable of assigning words automatically to images due to the inherent difficulty of recognizing numerous objects at once. This situation is stimulating several research endeavors that seek to assign text to images, thereby improving image retrieval in large databases. To enhance information processing using object recognition techniques, current research has focused on automatic linguistic indexing of digital images (ALIDI). ALIDI requires a combination of mathematical, statistical, computational, and graphical backgrounds. Many researchers have focused on various aspects of linguistic processing such as CBIR (Ghosal, Ircing, & Khudanpur, 2005; Iqbal & Aggarwal, 2002; Wang, 2001), machine learning techniques (Iqbal & Aggarwal, 2002), digital libraries (Witten & Bainbridge, 2003) and statistical modeling (Li, Gray, & Olsen, 2004; Li & Wang, 2003).
A growing approach is the utilization of statistical models, as demonstrated by Li and Wang (2003). It entails building databases of images to be used for supervised learning. A trained system is used to recognize and identify new images within a statistical error margin. This statistical modeling approach uses a hidden Markov model to extract representative information about any category of images analyzed. However, in using computers to recognize images with textual descriptions, some researchers employ solely text-based approaches. In this article, the focus is on the computational and graphical aspects of ALIDI in a system that uses Web-based access in order to enable wider usage (Ntoulas, Chao, & Cho, 2005). This system uses image composition (primary hue and saturation) in the linguistic indexing of digital images or pictures.
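The statistical-modeling idea, fitting a profile per image category and assigning that category's words to new images, can be sketched with a simple Gaussian color profile standing in for the hidden Markov model (a toy illustration, not Li and Wang's actual model; all names and data are hypothetical):

```python
import numpy as np

def fit_category_profiles(training):
    """training: {category: list of (H, W, 3) images}.
    Fits a mean/std color profile per category (toy stand-in for an HMM)."""
    profiles = {}
    for cat, imgs in training.items():
        pixels = np.concatenate([im.reshape(-1, 3) for im in imgs]).astype(float)
        profiles[cat] = (pixels.mean(axis=0), pixels.std(axis=0) + 1e-6)
    return profiles

def index_image(img, profiles, words):
    """Assign the words of the best-matching category to a new image."""
    feat = img.reshape(-1, 3).astype(float).mean(axis=0)
    def score(cat):
        mean, std = profiles[cat]
        return np.sum(((feat - mean) / std) ** 2)  # lower is better
    best = min(profiles, key=score)
    return best, words[best]

# Hypothetical two-category training set: bluish "ocean" vs. greenish "forest"
ocean = [np.tile([10, 20, 200], (4, 4, 1)).astype(np.uint8)]
forest = [np.tile([10, 180, 20], (4, 4, 1)).astype(np.uint8)]
profiles = fit_category_profiles({"ocean": ocean, "forest": forest})
words = {"ocean": ["water", "sea", "blue"], "forest": ["trees", "green"]}
query = np.tile([15, 25, 190], (4, 4, 1)).astype(np.uint8)
cat, terms = index_image(query, profiles, words)
```

The supervised structure, train per-category models, then label a new image with the winning category's vocabulary, is what the article builds on.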


2018 ◽  
Vol 7 (1) ◽  
pp. 57-66
Author(s):  
Hussein Ali Mezher Alhamzawi

In this article we present an implementation of face and eye detection in digital images based on Haar-like feature extraction and a cascade classifier. These techniques achieved detection rates of 100% for faces and 92% for eyes in the best cases, with low processing time. We used inexpensive equipment (an Acer TravelMate web camera). The OpenCV computer vision library and the Python language were used in this work.


2021 ◽  
Author(s):  
Moritz D Luerig

Digital images are a ubiquitous way to represent phenotypes. More and more ecologists and evolutionary biologists are using images to capture and analyze high dimensional phenotypic data to understand complex developmental and evolutionary processes. As a consequence, images are being collected at ever increasing rates, already outpacing our ability to process and analyze the contained phenotypic information. phenopype is a high throughput phenotyping package for the programming language Python that supports ecologists and evolutionary biologists in extracting high dimensional phenotypic data from digital images. phenopype integrates existing state-of-the-art computer vision functions (using the OpenCV library as a backend), GUI-based interactions, and a project management ecosystem to facilitate rapid data collection and reproducibility. phenopype offers three different workflow types that support users during different stages of scientific image analysis (prototyping, low-throughput, and high-throughput). In the high-throughput workflow, users interact with human-readable YAML configuration files to efficiently modify settings for different images. These settings are stored along with processed images and results, so that the acquired phenotypic information becomes highly reproducible. phenopype combines the advantages of the Python environment, with its state-of-the-art computer vision, array manipulation and data handling libraries, with basic GUI capabilities that allow users to step into the automatic workflow when necessary. Overall, phenopype aims to augment, rather than replace, the utility of existing Python CV libraries, allowing biologists to focus on rapid and reproducible data collection.
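The high-throughput workflow's human-readable configuration might look something like the following; the step and parameter names here are purely illustrative of the YAML idea, not phenopype's actual schema:

```yaml
# hypothetical per-project settings file (illustrative names only)
preprocessing:
  blur:
    kernel_size: 5
segmentation:
  threshold:
    method: adaptive
    blocksize: 99
measurement:
  shape_features: true
export:
  save_canvas: true
```

Because such a file is stored next to each processed image, re-running the pipeline with identical settings reproduces the extracted data exactly.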


2020 ◽  
Vol 4 (4) ◽  
pp. 751-756
Author(s):  
Hadid Tunas Bangsawan ◽  
Lukman Hanafi ◽  
Deny Suryana

Computer vision (CV) is an interdisciplinary scientific field that studies how computers can gain a high-level understanding of digital images or video. A system capable of detecting compact fluorescent lamp (CFL) light has previously been created. However, that research did not verify that only the glowing part of the lamp was being segmented, and no multi-lamp testing was performed. This study compares the lamp segmentation when the lamp is OFF and ON, so that the accuracy of the system can be verified, and performs multi-lamp testing. The method is experimental, collecting data by direct observation of the system's output. The system consists of a single-board computer and a common webcam. The results show that, with an appropriate threshold setting, the difference between the lamp segmentation in the OFF and ON states is small, so the lamp light imaging system functions with good reliability.
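The OFF vs. ON comparison reduces to thresholding frames and differencing the segmented masks; a minimal NumPy sketch (the threshold value and toy frames are illustrative):

```python
import numpy as np

def segment_lamp(gray, thresh=180):
    """Binary mask of pixels bright enough to be lamp light."""
    return gray > thresh

def segmentation_difference(frame_off, frame_on, thresh=180):
    """Fraction of pixels whose segmentation differs between OFF and ON frames.

    A value close to the lamp's own area (and not larger) indicates that
    only the glowing part is being segmented."""
    off_mask = segment_lamp(frame_off, thresh)
    on_mask = segment_lamp(frame_on, thresh)
    return np.logical_xor(off_mask, on_mask).mean()

# Toy 6x6 frames: only a 2x2 lamp region brightens when switched ON
frame_off = np.full((6, 6), 50, dtype=np.uint8)
frame_on = frame_off.copy()
frame_on[2:4, 2:4] = 250
diff = segmentation_difference(frame_off, frame_on)  # 4/36 ≈ 0.111
```

For multi-lamp testing the same differencing applies per frame; each lamp contributes its own connected region to the XOR mask.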


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Radhika Kamath ◽  
Mamatha Balachandra ◽  
Srikanth Prabhu

Weeds are unwanted plants that grow among crops. These weeds can significantly reduce the yield and quality of the farm output. Unfortunately, site-specific weed management is not followed in most cases. That is, instead of treating a field with a specific type of herbicide, the field is treated with a broadcast herbicide application. This broadcast application of herbicide has resulted in herbicide-resistant weeds and has many ill effects on the natural environment. This has prompted many research studies to seek the most effective weed management techniques. One such technique is computer vision-based automatic weed detection and identification. Using this technique, weeds can be detected and identified and a suitable herbicide can be recommended to farmers. Therefore, it is important for the computer vision technique to successfully identify and classify the crops and weeds from the digital images. This paper investigates multiple classifier systems built from support vector machine and random forest classifiers for classifying paddy crops and weeds in digital images. Digital images of paddy crops and weeds were acquired from paddy fields using three different cameras fixed at different heights from the ground. Texture, color, and shape features were extracted from the digital images after background subtraction and used for classification. A simple and new method was used as the decision function in the multiple classifier systems. An accuracy of 91.36% was obtained by the multiple classifier systems, which were found to outperform single classifier systems.
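A multiple classifier system of this kind can be sketched with scikit-learn: an SVM and a random forest each predict, and a simple decision function breaks disagreements by predicted class probability. The decision rule and the synthetic features below are generic stand-ins, not the paper's new method or its texture/color/shape features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def mcs_predict(svm, rf, X):
    """Multiple classifier system: if the two classifiers agree, take the
    label; otherwise trust the one with the higher class probability."""
    p_svm, p_rf = svm.predict_proba(X), rf.predict_proba(X)
    y_svm, y_rf = p_svm.argmax(axis=1), p_rf.argmax(axis=1)
    out = y_svm.copy()
    disagree = y_svm != y_rf
    svm_wins = p_svm.max(axis=1)[disagree] >= p_rf.max(axis=1)[disagree]
    out[disagree] = np.where(svm_wins, y_svm[disagree], y_rf[disagree])
    return out

# Synthetic stand-in for extracted feature vectors of crop vs. weed samples
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (40, 3)), rng.normal(3, 0.5, (40, 3))])
y = np.array([0] * 40 + [1] * 40)   # 0 = paddy crop, 1 = weed

svm = SVC(probability=True).fit(X, y)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc = (mcs_predict(svm, rf, X) == y).mean()
```

Combining classifiers this way tends to help exactly when the base models err on different samples, which is the rationale behind the paper's multiple classifier design.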

