Tensor-to-Image: Image-to-Image Translation with Vision Transformers

2021 ◽  
Author(s):  
Yigit Gunduc

Transformers have attracted enormous attention since they were first introduced, and they have a wide range of applications. Transformers have begun to take over all areas of deep learning, and the Vision Transformers paper also proved that they can be used for computer vision tasks. In this paper, we utilized a vision-transformer-based custom-designed model, tensor-to-image, for image-to-image translation. With the help of self-attention, our model was able to generalize and apply to different problems without a single modification.
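The self-attention mechanism this abstract relies on can be sketched in a few lines. Below is a generic scaled dot-product attention over plain Python lists, shown only as an illustration of the mechanism, not the paper's actual model code.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends over all keys,
    producing a weighted average of the value vectors."""
    d = len(K[0])  # key dimensionality, used for the 1/sqrt(d) scaling
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

With identical keys, the weights are uniform, so the output is the plain average of the values, which is a quick sanity check of the implementation.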


2020 ◽  
Author(s):  
Cedar Warman ◽  
Christopher M. Sullivan ◽  
Justin Preece ◽  
Michaela E. Buchanan ◽  
Zuzana Vejlupkova ◽  
...  

Abstract
High-throughput phenotyping systems are powerful, dramatically changing our ability to document, measure, and detect biological phenomena. Here, we describe a cost-effective combination of a custom-built imaging platform and a deep-learning-based computer vision pipeline. A minimal version of the maize ear scanner was built with low-cost and readily available parts. The scanner rotates a maize ear while a cellphone or digital camera captures a video of the surface of the ear. Videos are then digitally flattened into two-dimensional ear projections. Segregating GFP and anthocyanin kernel phenotypes are clearly distinguishable in ear projections and can be manually annotated using image analysis software. Increased throughput was attained by designing and implementing an automated kernel counting system using transfer learning and a deep learning object detection model. The computer vision model was able to rapidly assess over 390,000 kernels, identifying male-specific transmission defects across a wide range of GFP-marked mutant alleles. This includes a previously undescribed defect putatively associated with mutation of Zm00001d002824, a gene predicted to encode a vacuolar processing enzyme (VPE). We show that by using this system, the quantification of transmission data and other ear phenotypes can be accelerated and scaled to generate large datasets for robust analyses.

One sentence summary
A maize ear phenotyping system built from commonly available parts creates images of the surface of ears and identifies kernel phenotypes with a deep-learning-based computer vision pipeline.
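As a toy illustration of how segregating kernel counts could be screened for transmission defects, the sketch below compares observed GFP vs. non-GFP kernel counts against the 1:1 ratio expected under normal transmission, using a chi-square goodness-of-fit statistic. This is a generic statistical check, not the pipeline's actual analysis code.

```python
def segregation_chi_square(gfp, non_gfp):
    """Chi-square goodness-of-fit statistic against a 1:1 segregation ratio.
    With 1 degree of freedom, values above ~3.84 indicate a significant
    deviation from 1:1 at the p < 0.05 level."""
    total = gfp + non_gfp
    expected = total / 2.0  # 1:1 ratio expected from a heterozygous parent
    return sum((obs - expected) ** 2 / expected for obs in (gfp, non_gfp))
```

For example, `segregation_chi_square(45, 55)` yields 1.0, well below the 3.84 cutoff, so a 45:55 ear would not flag a transmission defect on its own.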


2021 ◽  
Author(s):  
RM Connolly ◽  
DV Fairclough ◽  
EL Jinks ◽  
EM Ditria ◽  
G Jackson ◽  
...  

Abstract
The ongoing need to sustainably manage fishery resources necessitates fishery-independent monitoring of the status of fish stocks. Camera systems, particularly baited remote underwater video stations (BRUVS), are a widely used and repeatable method for monitoring relative abundance, required for building stock assessment models. The potential for BRUVS-based monitoring is restricted, however, by the substantial costs of manual data extraction from videos. Computer vision, in particular deep learning models, is increasingly being used to automatically detect and count fish at low abundances in videos. One of the advantages of BRUVS is that bait attractants help to reliably detect species in relatively short deployments (e.g. 1 hr). The high abundances of fish attracted to BRUVS, however, make computer vision more difficult, because fish often occlude other fish. We build upon existing deep learning methods for identifying and counting a target fisheries species across a wide range of fish abundances. Using BRUVS imagery targeting a recovering fishery species, Australian snapper (Chrysophrys auratus), we tested combinations of three further mathematical steps likely to generate accurate, efficient automation: 1) varying confidence thresholds (CTs), 2) on/off use of sequential non-maximum suppression (Seq-NMS), and 3) statistical correction equations. Output from the deep learning model was accurate at very low abundances of snapper; at higher abundances, however, the model over-predicted counts by as much as 50%. The procedure providing the most accurate counts across all fish abundances, with counts either correct or within 1 to 2 of manual counts (R2 = 93.4%), used Seq-NMS, a 55% confidence threshold, and a cubic polynomial corrective equation. The optimised modelling provides an automated procedure offering an effective and efficient method for accurately identifying and counting snapper in BRUV footage. Further testing is required to ensure that automated counts of snapper remain accurate in the survey region over time, and to determine the applicability to other regions within the distributional range of this species. For monitoring stocks of fishery species more generally, the specific equations will differ, but the procedure demonstrated here would help to increase the usefulness of BRUVS while decreasing costs.
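The post-processing steps described above (confidence thresholding of raw detections, followed by a polynomial correction of the resulting count) can be sketched as follows. The coefficient values are placeholders for illustration only, since the fitted corrective equation is not given in the abstract.

```python
def filter_by_confidence(detections, ct=0.55):
    """Keep only detections at or above the confidence threshold (CT)."""
    return [d for d in detections if d["confidence"] >= ct]

def corrected_count(raw_count, coeffs=(0.0, 1.0, -0.002, 0.00001)):
    """Apply a cubic polynomial correction to a raw detection count.

    coeffs = (a, b, c, d) for a + b*n + c*n**2 + d*n**3; the values here
    are hypothetical, standing in for the fitted equation from the study."""
    a, b, c, d = coeffs
    n = raw_count
    return a + b * n + c * n ** 2 + d * n ** 3

detections = [{"confidence": 0.9}, {"confidence": 0.6}, {"confidence": 0.4}]
kept = filter_by_confidence(detections, ct=0.55)
estimate = corrected_count(len(kept))
```

A negative cubic/quadratic term of this kind is one way such a correction can pull down the over-predicted counts at high fish abundances while leaving low counts nearly unchanged.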


2021 ◽  
Vol 2089 (1) ◽  
pp. 012079
Author(s):  
Makkena Brahmaiah ◽  
Srinivasa Rao Madala ◽  
Ch Mastan Chowdary

Abstract
As crime rates rise at large events and in isolated places, security is always a top concern in every field. A wide range of issues can be addressed with computer vision, including anomaly detection and monitoring. Intelligence monitoring is becoming more dependent on video surveillance systems that can recognise and analyse scenes and anomalous events. Using SSD and Faster R-CNN techniques, this paper provides automated gun (or weapon) identification. The proposed approach uses two different kinds of datasets: unlike the first dataset, the second comprises images that have been manually labelled. The trade-off between speed and precision in real-world situations determines whether each method will be useful.
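Detectors such as SSD and Faster R-CNN typically post-process overlapping candidate boxes with non-maximum suppression (NMS), which is central to the speed/precision trade-off mentioned above. A minimal pure-Python sketch of greedy NMS, not tied to any particular framework:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(a[0], b[0]); y1 = max(a[1], b[1])
    x2 = min(a[2], b[2]); y2 = min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap
    it above the IoU threshold, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```

For example, two heavily overlapping weapon detections collapse to the higher-scoring one, while a distant detection survives.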


Author(s):  
Toke T. Høye ◽  
Johanna Ärje ◽  
Kim Bjerge ◽  
Oskar L. P. Hansen ◽  
Alexandros Iosifidis ◽  
...  

ABSTRACT
Most animal species on Earth are insects, and recent reports suggest that their abundance is in drastic decline. Although these reports come from a wide range of insect taxa and regions, the evidence to assess the extent of the phenomenon is still sparse. Insect populations are challenging to study, and most monitoring methods are labour intensive and inefficient. Advances in computer vision and deep learning provide potential new solutions to this global challenge. Cameras and other sensors can effectively, continuously, and non-invasively perform entomological observations throughout diurnal and seasonal cycles. The physical appearance of specimens can also be captured by automated imaging in the lab. When trained on these data, deep learning models can provide estimates of insect abundance, biomass, and diversity. Further, deep learning models can quantify variation in phenotypic traits, behaviour, and interactions. Here, we connect recent developments in deep learning and computer vision to the urgent demand for more cost-efficient monitoring of insects and other invertebrates. We present examples of sensor-based monitoring of insects. We show how deep learning tools can be applied to the big data outputs to derive ecological information and discuss the challenges that lie ahead for the implementation of such solutions in entomology. We identify four focal areas, which will facilitate this transformation: 1) validation of image-based taxonomic identification, 2) generation of sufficient training data, 3) development of public, curated reference databases, and 4) solutions to integrate deep learning and molecular tools.

Significance statement
Insect populations are challenging to study, but computer vision and deep learning provide opportunities for continuous and non-invasive monitoring of biodiversity around the clock and over entire seasons. These tools can also facilitate the processing of samples in a laboratory setting. Automated imaging in particular can provide an effective way of identifying and counting specimens to measure abundance. We present examples of sensors and devices of relevance to entomology and show how deep learning tools can convert the big data streams into ecological information. We discuss the challenges that lie ahead and identify four focal areas to make deep learning and computer vision game changers for entomology.


2021 ◽  
Vol 2021 (1) ◽  
pp. 11-15
Author(s):  
Marco Leonardi ◽  
Paolo Napoletano ◽  
Alessandro Rozza ◽  
Raimondo Schettini

Automatic assessment of image aesthetics is a challenging task for the computer vision community that has a wide range of applications. The most promising state-of-the-art approaches are based on deep learning methods that jointly predict aesthetics-related attributes and an aesthetics score. In this article, we propose a method that learns the aesthetics score on the basis of the prediction of aesthetics-related attributes. To this end, we extract a multi-level spatially pooled (MLSP) feature set from a pretrained ImageNet network; these features are then used to train a Multi-Layer Perceptron (MLP) to predict image aesthetics-related attributes. A Support Vector Regression machine (SVR) is finally used to estimate the image aesthetics score starting from the aesthetics-related attributes. Experimental results on the "Aesthetics with Attributes Database" (AADB) demonstrate the effectiveness of our approach, which outperforms the state of the art by about 5.5% in terms of Spearman's Rank-order Correlation Coefficient (SROCC).
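SROCC, the evaluation metric used here, is Pearson's correlation computed on the ranks of the two score lists. A minimal implementation (assuming no tied values, which the full definition handles via averaged ranks):

```python
def ranks(values):
    """Rank positions (1-based) of each value; assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def srocc(x, y):
    """Spearman's rank-order correlation: Pearson correlation on ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx = sum(rx) / n
    my = sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A perfectly monotonic relationship between predicted and ground-truth aesthetics scores yields an SROCC of 1.0 regardless of the scores' scale, which is why the metric suits this ranking-style evaluation.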


Author(s):  
Bobburi Taralathasri ◽  
Dammati Vidya Sri ◽  
Gadidammalla Narendra Kumar ◽  
Annam Subbarao ◽  
Palli R Krishna Prasad

Major and wide-ranging applications such as driverless cars, robots, and image surveillance have become prominent in computer vision. Computer vision is the core of all these applications, as it is responsible for image detection, and it has become popular worldwide. The "Object Detection System using Deep Learning Technique" detects objects efficiently based on the YOLO algorithm and applies the algorithm to image data to detect objects.
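YOLO-family detectors predict bounding boxes relative to grid cells and anchor priors. The sketch below decodes raw network outputs into a normalised box, following the YOLOv2-style parameterisation as an illustration of the idea; it is not the system's actual code, and later YOLO versions vary the details.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph, grid_size):
    """Decode raw YOLO-style outputs (tx, ty, tw, th) into a normalised
    (x, y, w, h) box.

    (cx, cy): integer grid-cell indices the prediction belongs to.
    (pw, ph): anchor-box priors, already normalised to image size."""
    x = (sigmoid(tx) + cx) / grid_size  # centre offset within the cell
    y = (sigmoid(ty) + cy) / grid_size
    w = pw * math.exp(tw)               # width/height scale the anchor prior
    h = ph * math.exp(th)
    return x, y, w, h
```

With all raw outputs at zero, the box sits at the centre of its grid cell with exactly the anchor's size, which makes the parameterisation easy to sanity-check.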


2021 ◽  
Vol 118 (2) ◽  
pp. e2002545117
Author(s):  
Toke T. Høye ◽  
Johanna Ärje ◽  
Kim Bjerge ◽  
Oskar L. P. Hansen ◽  
Alexandros Iosifidis ◽  
...  

Most animal species on Earth are insects, and recent reports suggest that their abundance is in drastic decline. Although these reports come from a wide range of insect taxa and regions, the evidence to assess the extent of the phenomenon is sparse. Insect populations are challenging to study, and most monitoring methods are labor intensive and inefficient. Advances in computer vision and deep learning provide potential new solutions to this global challenge. Cameras and other sensors can effectively, continuously, and noninvasively perform entomological observations throughout diurnal and seasonal cycles. The physical appearance of specimens can also be captured by automated imaging in the laboratory. When trained on these data, deep learning models can provide estimates of insect abundance, biomass, and diversity. Further, deep learning models can quantify variation in phenotypic traits, behavior, and interactions. Here, we connect recent developments in deep learning and computer vision to the urgent demand for more cost-efficient monitoring of insects and other invertebrates. We present examples of sensor-based monitoring of insects. We show how deep learning tools can be applied to exceptionally large datasets to derive ecological information and discuss the challenges that lie ahead for the implementation of such solutions in entomology. We identify four focal areas, which will facilitate this transformation: 1) validation of image-based taxonomic identification; 2) generation of sufficient training data; 3) development of public, curated reference databases; and 4) solutions to integrate deep learning and molecular tools.


2021 ◽  
Vol 109 (5) ◽  
pp. 863-890
Author(s):  
Yannis Panagakis ◽  
Jean Kossaifi ◽  
Grigorios G. Chrysos ◽  
James Oldfield ◽  
Mihalis A. Nicolaou ◽  
...  

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract
Background
Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding study. The generally huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a reasonable need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing on image datasets.
Results
On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotype, embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking the RGB image of maize as input, the software provides a user-friendly graphical interaction interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. In the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625.
Conclusion
Maize-IAS is easy to use and demands neither professional knowledge of computer vision nor deep learning. All functions for batch processing are incorporated, enabling automated and labor-reduced tasks of recording, measurement and quantitative analysis of maize growth traits on a large dataset. We demonstrate the efficiency and potential capability of our techniques and software for image-based plant research, which also demonstrates the feasibility and capability of AI technology implemented in agriculture and plant science.
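The reported error figures (mean and standard deviation of the difference between predicted and ground-truth leaf counts) correspond to a simple computation like the one below. This is an illustrative sketch; the abstract does not state whether a population or sample standard deviation was used, so the population form is assumed here.

```python
def error_stats(predicted, truth):
    """Mean and (population) standard deviation of prediction errors."""
    diffs = [p - t for p, t in zip(predicted, truth)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / n
    return mean, var ** 0.5
```

For example, predictions of [10, 12, 9] leaves against ground truth [9, 10, 9] give a mean error of 1.0 and a standard deviation of about 0.816.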

