image capture
Recently Published Documents

TOTAL DOCUMENTS: 708 (five years: 184)
H-INDEX: 26 (five years: 5)

2022, Vol 12 (1). Author(s): Wonsub Yun, J. Praveen Kumar, Sangjoon Lee, Dong-Soo Kim, Byoung-Kwan Cho

The prevention of the loss of agricultural resources caused by pests is an important issue. Although technology continues to advance, current farm management methods and equipment have not yet reached the level required for precise pest control, and most rely on manual management by professional workers. Hence, a pest detection system based on deep learning was developed for automatic pest density measurement. In the proposed system, an image capture device for pheromone traps was developed to address the nonuniform shooting distance and the reflection from the trap's outer vinyl during image capture. Because the black pine bast scale pest is small, each pheromone trap is captured as several subimages, which are used to train the deep learning model. Finally, the subimages are integrated by an image stitching algorithm to form an image of the entire trap. These processes are managed with the developed smartphone application. The deep learning model then detects the pests in the stitched image. The experimental results indicate that the model achieves an F1 score of 0.90 and a mAP of 94.7%, suggesting that a deep learning model based on object detection can be used for quick and automatic detection of pests attracted to pheromone traps.
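As an illustration of two of the steps mentioned above, the hedged sketch below stitches trap subimages with OpenCV's general-purpose stitcher and computes an F1 score from detection counts. The authors' detector and stitching pipeline are not specified in the abstract, so this is a generic sketch rather than their implementation.

```python
# Illustrative sketch only: stitching trap sub-images and scoring detections.
# This is NOT the authors' pipeline; the stitcher and the F1 formula are
# generic stand-ins for the steps described in the abstract.
import cv2


def stitch_trap_images(subimages):
    """Combine overlapping sub-images of one pheromone trap into a single view."""
    stitcher = cv2.Stitcher_create()              # OpenCV's general-purpose stitcher
    status, panorama = stitcher.stitch(subimages)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama


def f1_score(true_positives, false_positives, false_negatives):
    """F1 = harmonic mean of precision and recall, the metric reported above."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)
```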


Machines, 2022, Vol 10 (1), pp. 52. Author(s): Mark Jacob Schrader, Peter Smytheman, Elizabeth H. Beers, Lav R. Khot

This note describes the development of a plug-in imaging system for the pheromone delta traps used in pest population monitoring. The plug-in comprises an RGB imaging sensor integrated with a microcontroller unit and associated hardware for optimized power usage and data capture. The plug-in can be attached to the top of a modified delta trap to enable periodic image capture of the trap liner (17.8 cm × 17.8 cm). As configured, the captured images are stored on a microSD card with a spatial resolution of ~0.01 cm² per pixel. The plug-in hardware is configured to conserve power by entering sleep mode during idle operation. Twenty traps with plug-in units were constructed and evaluated in the 2020 field season for codling moth (Cydia pomonella) population monitoring in a research study. The units reliably captured images at a daily interval over the course of two weeks with a 350 mAh DC power source. The captured images provided the temporal population dynamics of codling moth that would otherwise require daily manual trap monitoring. The system's build cost is about $33 per unit, and it has potential for scaling to commercial applications through the integration of Internet of Things-enabled technologies.
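The capture-then-sleep duty cycle described above can be sketched as follows. The plug-in's firmware and hardware APIs are not published in the note, so capture_frame and enter_deep_sleep are hypothetical placeholders standing in for the camera driver and the microcontroller's power-management call.

```python
# Hedged sketch of the daily capture-then-sleep duty cycle described above.
# capture_frame() and enter_deep_sleep() are hypothetical placeholders, not the
# plug-in's actual firmware calls, which are not given in the note.
import time

CAPTURE_INTERVAL_S = 24 * 60 * 60          # one image per day, as in the field trial


def capture_frame(path):
    """Placeholder: trigger the RGB sensor and write a JPEG to the microSD card."""
    print(f"captured image -> {path}")


def enter_deep_sleep(seconds):
    """Placeholder: put the microcontroller into its low-power state between captures."""
    time.sleep(seconds)                     # a real unit would power down peripherals here


def run():
    while True:
        timestamp = time.strftime("%Y%m%d_%H%M%S")
        capture_frame(f"/sd/trap_{timestamp}.jpg")   # store on the microSD card
        enter_deep_sleep(CAPTURE_INTERVAL_S)         # conserve the 350 mAh battery
```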


2022. Author(s): Cary Smith, Walker McCord, Zhili Zhang, Naibo Jiang, Paul Hsu, ...

OENO One, 2022, Vol 56 (1), pp. 1-15. Author(s): Amber K. Parker, Jaco Fourie, Mike C. T. Trought, Kapila Phalawatta, Esther Meenken, ...

The time of flowering is key to understanding the development of grapevines. Flowering coincides with inflorescence initiation and fruit set, important determinants of yield. This research aimed to determine the between- and within-vine variability in inflorescence number per shoot, number of flowers per inflorescence and flowering progression in 4-cane-pruned Sauvignon blanc, using an objective method of assessing flowering via image capture and statistical analysis within a Bayesian modelling framework. The inflorescence number and number of flowers per inflorescence were measured by taking images over the flowering period. Flowering progression was assessed by counting open and closed flowers in each image over two seasons. An ordinal multinomial generalised linear mixed-effects model (GLMM) was fitted for inflorescence number, a Poisson GLMM for flower counts and a binomial GLMM for flowering progression. All the models were fitted and interpreted within a Bayesian modelling framework. Shoots arising from cane node one had lower numbers of inflorescences than those at nodes 3, 5 and 7, which were similar. The number of flowers per inflorescence was greater for basal inflorescences on a shoot than for apical ones. Flowering was two weeks earlier, and faster, in 2017/18 than in 2018/19, reflecting seasonal temperature differences. The time and duration of flowering varied at each inflorescence position along the cane. While basal inflorescences flowered later, and apical ones earlier, at lower insertion points on the shoot, the variability in flowering at each position on the vine dominated the date and duration of flowering. This is the first study to use a Bayesian modelling framework to assess variability in inflorescence presence and flower number, as well as flowering progression, via objective quantification of open and closed flower counts rather than the more subjective method of visual estimation in the field or via cuttings. Although flower number differed between apical and basal bunches, little difference in the timing and progression of flowering by these categories was observed; the node insertion point along a shoot was more important. Overall, the results indicate that individual inflorescence variation and season are the key factors driving flowering variability and are most likely to impact fruit set and yield.
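For readers unfamiliar with the model types named above, the hedged sketch below shows one of them, a Poisson GLMM for flower counts with a vine-level random effect, written with PyMC. The priors, predictors and dummy data are illustrative assumptions, not the authors' model specification.

```python
# Illustrative Poisson GLMM sketch in PyMC; priors, predictors and data shapes
# are assumptions for demonstration, not the authors' fitted model.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_vines, n_obs = 20, 200
vine_idx = rng.integers(0, n_vines, n_obs)       # which vine each inflorescence sits on
node_pos = rng.integers(1, 8, n_obs)             # cane node insertion point (1..7)
flowers = rng.poisson(150, n_obs)                # observed flower counts (dummy data)

with pm.Model() as flower_model:
    intercept = pm.Normal("intercept", mu=5.0, sigma=1.0)
    beta_node = pm.Normal("beta_node", mu=0.0, sigma=0.5)
    sigma_vine = pm.HalfNormal("sigma_vine", sigma=1.0)
    vine_effect = pm.Normal("vine_effect", mu=0.0, sigma=sigma_vine, shape=n_vines)

    # log link: expected count depends on node position plus a vine-level random effect
    log_mu = intercept + beta_node * node_pos + vine_effect[vine_idx]
    pm.Poisson("flowers", mu=pm.math.exp(log_mu), observed=flowers)

    idata = pm.sample(1000, tune=1000, chains=2)  # posterior draws via NUTS
```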


Sensors, 2022, Vol 22 (1), pp. 334. Author(s): Li Li, Ahmed A. Abd El-Latif, Sajad Jafari, Karthikeyan Rajagopal, Fahimeh Nazarimehr, ...

Multimedia data play an important role in our daily lives. The evolution of internet technologies means that multimedia data can easily be shared among various users for specific purposes, which raises serious security issues for multimedia data confidentiality and integrity. Chaos models play an important role in designing robust multimedia data cryptosystems. In this paper, a novel chaotic oscillator is presented. The oscillator has a particular property in which the chaotic dynamics lie around pre-located manifolds. Various dynamics of the oscillator are studied. After analyzing the complex dynamics of the oscillator, it is applied to the design of a new image cryptosystem, and the results of the presented cryptosystem are tested from various viewpoints such as randomness, encryption time, correlation, plain-image sensitivity, key space, key sensitivity, histogram, entropy, resistance to classical types of attacks, and data loss analyses. The goal of the paper is to propose an applicable encryption method based on a novel chaotic oscillator with an attractor around a pre-located manifold. All the investigations confirm the reliability of using the presented cryptosystem for various IoT applications, from image capture to subsequent use.
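The abstract does not give the oscillator equations or the cipher structure, so the sketch below uses the classic logistic map as a stand-in chaotic source to illustrate the generic keystream-XOR pattern common to chaos-based image encryption. It is not the authors' cryptosystem and is not cryptographically secure.

```python
# Generic chaos-based image encryption sketch using the logistic map as a
# stand-in keystream generator. This is NOT the paper's oscillator and must not
# be used as a real cipher; it only illustrates the keystream-XOR structure.
import numpy as np


def logistic_keystream(length, x0=0.6123, r=3.99):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and quantize each state to a byte."""
    x = x0
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream


def encrypt_image(image, x0=0.6123):
    """XOR each pixel with the chaotic keystream; decryption is the same operation."""
    flat = image.reshape(-1)
    keystream = logistic_keystream(flat.size, x0=x0)
    return (flat ^ keystream).reshape(image.shape)


if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)    # dummy grayscale image
    cipher = encrypt_image(img)
    assert np.array_equal(encrypt_image(cipher), img)            # XOR is its own inverse
```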


2021, Vol 15 (3), pp. 104-132. Author(s): Lorenzo Rinelli

This intervention arose from the growing use of facial recognition technologies in population management, more specifically in the control of European migration. Inspired by these circumstances, I reflect on the use of optical lenses, from Europeans' early use of the camera as a colonial weapon to today's image capture devices as tracking tools for detecting and encamping people on the move. I believe that this archaeological methodology, with its aesthetic sensibility, makes it possible to reveal how contemporary disciplinary techniques of image capture are produced by a complex relation of power and knowledge framed within the same biometric, truth-seeking logic that marked European colonial domination. I conclude my intervention by presenting a powerful work by a contemporary artist that breaks the illusory claim of scientific truth and impartiality that still colonizes the system of visual verification and, by evoking forgotten African roots of modernity, ultimately disrupts its set of power relations.


2021, Vol 7 (1). Author(s): Simon Emberton, Christopher Simons

Within the worldwide diving community, underwater photography is becoming increasingly popular. However, the marine environment presents certain challenges for image capture, with the resulting imagery often suffering from colour distortions, low contrast and blurring. As a result, image enhancement software is used not only to enhance the imagery aesthetically, but also to address these degradations. Although feature-rich image enhancement software products are available, little is known about the user experience of underwater photographers when interacting with such tools. To address this gap, we conducted an online questionnaire to better understand which software tools are being used, and face-to-face interviews to investigate the characteristics of the image enhancement user experience for underwater photographers. We analysed the interview transcripts using the pragmatic and hedonic categories from the frameworks of Hassenzahl (Funology, Kluwer Academic Publishers, Dordrecht, pp 31–42, 2003; Funology 2, Springer, pp 301–313, 2018) for positive and negative user experience. Our results reveal a moderately negative experience overall for both pragmatic and hedonic categories. We draw some insights from the findings and make recommendations for improving the user experience for underwater photographers using image enhancement tools.
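As a generic illustration of the kinds of corrections such enhancement tools apply to colour cast and low contrast (not the specific software evaluated in this study), a minimal sketch:

```python
# Minimal illustration of two corrections underwater enhancement tools commonly
# apply (gray-world white balance and per-channel contrast stretching); a
# generic sketch only, not the software examined in the study.
import numpy as np


def gray_world_white_balance(img):
    """Scale each channel so its mean matches the overall mean, reducing colour cast."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    scale = channel_means.mean() / channel_means
    return np.clip(img * scale, 0, 255).astype(np.uint8)


def contrast_stretch(img, low_pct=1, high_pct=99):
    """Stretch each channel between its low/high percentiles to boost contrast."""
    out = np.empty_like(img)
    for c in range(3):
        lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
        out[..., c] = np.clip((img[..., c] - lo) * 255.0 / max(hi - lo, 1), 0, 255)
    return out
```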


2021, Vol 13 (24), pp. 5135. Author(s): Yahya Alshawabkeh, Ahmad Baik, Ahmad Fallatah

The work described in this paper emphasizes the importance of integrating imagery and terrestrial laser scanning (TLS) techniques to optimize the geometry and visual quality of Heritage BIM (HBIM). The fusion-based workflow was applied during the recording of Zee Ain Historical Village in Saudi Arabia. The village is a unique example of traditional human settlement and represents a complex natural and cultural heritage site. The proposed workflow divides data integration into two levels. At the first level, UAV photogrammetry, with its enhanced mobility and visibility, is used to map the rugged terrain and to supplement TLS point data in upper and inaccessible building zones where data shadows originate. Merging the point clouds ensures that the building's overall geometry is correctly rebuilt and that data interpretation is improved during HBIM digitization. In addition to correct geometry, texture mapping is particularly important in the area of cultural heritage. Constructing a realistic texture remains a challenge in HBIM: because the standard textures and materials provided in BIM libraries do not allow for reliable representation of heritage structures, the information mapped and shared is not always faithful. Therefore, at the second level, the workflow proposes a true orthophoto texturing method for HBIM models that combines close-range imagery and laser data. True orthophotos have a uniform scale that depicts all objects in their respective planimetric positions, providing reliable and realistic mapping. The process begins with the development of a Digital Surface Model (DSM) by sampling TLS 3D points in a regular grid, with each cell uniquely associated with a model point. Each DSM cell is then projected into the corresponding perspective imagery in order to map the relevant spectral information. The method allows for flexible data fusion and image capture using either a TLS-mounted camera or a separate camera at the optimal time and viewpoint for radiometric data. The developed workflow produced adequate results in terms of a complete and realistically textured HBIM, allowing for a better understanding of complex heritage structures.
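A hedged sketch of the core texturing step described above: each DSM grid point is projected into a perspective image to sample its colour. The camera intrinsics and pose are illustrative assumptions, and lens distortion and occlusion tests are omitted, so this is a simplified stand-in for the authors' implementation.

```python
# Simplified sketch of projecting DSM grid cells into a perspective image to
# sample texture, per the workflow described above. Camera parameters are
# illustrative assumptions; distortion and occlusion handling are omitted.
import numpy as np


def project_points(points_xyz, K, R, t):
    """Pinhole projection: (N, 3) world points -> (N, 2) pixel coordinates (u, v)."""
    cam = R @ points_xyz.T + t.reshape(3, 1)       # world -> camera frame
    uv = K @ cam                                   # camera frame -> image plane
    return (uv[:2] / uv[2]).T                      # perspective divide


def texture_dsm(dsm_points, image, K, R, t):
    """Sample the image colour for every DSM cell that projects inside the frame."""
    h, w = image.shape[:2]
    uv = np.round(project_points(dsm_points, K, R, t)).astype(int)
    colours = np.zeros((len(dsm_points), 3), dtype=np.uint8)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colours[inside] = image[uv[inside, 1], uv[inside, 0]]
    return colours
```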


Iproceedings, 10.2196/35391, 2021, Vol 6 (1), pp. e35391. Author(s): Ibukun Oloruntoba, Toan D Nguyen, Zongyuan Ge, Tine Vestergaard, Victoria Mar

Background: Convolutional neural networks (CNNs) are a type of artificial intelligence that show promise as a diagnostic aid for skin cancer. However, the majority are trained using retrospective image data sets of varying quality and image capture standardization.

Objective: The aim of our study was to use CNN models with the same architecture but different training image sets, and to test the variability in performance when classifying skin cancer images in different populations acquired with different devices. Additionally, we wanted to assess the performance of the models against Danish teledermatologists when tested on images acquired from Denmark.

Methods: Three CNNs with the same architecture were trained. CNN-NS was trained on 25,331 nonstandardized images taken from the International Skin Imaging Collaboration using different image capture devices. CNN-S was trained on 235,268 standardized images, and CNN-S2 was trained on 25,331 standardized images (matched for number and classes of training images to CNN-NS). Both standardized data sets (CNN-S and CNN-S2) were provided by MoleMap using the same image capture device. A total of 495 Danish patients with 569 images of skin lesions, predominantly involving Fitzpatrick skin types II and III, were used to test the performance of the models. Four teledermatologists independently diagnosed and assessed the images taken of the lesions. Primary outcome measures were sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC).

Results: A total of 569 images were taken from 495 patients (n=280, 57% women; n=215, 43% men; mean age 55, SD 17 years) for this study. On these images, CNN-S achieved an AUROC of 0.861 (95% CI 0.830-0.889; P<.001), and CNN-S2 achieved an AUROC of 0.831 (95% CI 0.798-0.861; P=.009), with both outperforming CNN-NS, which achieved an AUROC of 0.759 (95% CI 0.722-0.794; P<.001; P=.009). When the CNNs were matched to the mean sensitivity and specificity of the teledermatologists, the models' resultant sensitivities and specificities were surpassed by the teledermatologists; however, when compared to CNN-S, the differences were not statistically significant (P=.10; P=.05). Performance across all CNN models and teledermatologists was influenced by image quality.

Conclusions: CNNs trained on standardized images had improved performance and therefore greater generalizability in skin cancer classification when applied to an unseen data set. This is an important consideration for future algorithm development, regulation, and approval. Further, when tested on these unseen test images, the teledermatologists clinically outperformed all the CNN models; however, the difference was not statistically significant when compared to CNN-S.

Conflicts of Interest: VM received speaker fees from Merck, Eli Lilly, Novartis and Bristol Myers Squibb. VM is the principal investigator for a clinical trial funded by the Victorian Department of Health and Human Services with a 1:1 contribution from MoleMap.
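The study's primary outcome measures (sensitivity, specificity, AUROC) can be computed from a model's predicted probabilities as in the hedged sketch below, which uses scikit-learn on dummy data; the numbers it prints are illustrative, not the study's results.

```python
# Sketch of the outcome metrics reported above (sensitivity, specificity, AUROC)
# computed with scikit-learn on dummy predictions, not the study's data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 569)                      # 1 = malignant, 0 = benign (dummy labels)
y_score = np.clip(y_true * 0.6 + rng.random(569) * 0.5, 0, 1)   # CNN probabilities (dummy)

auroc = roc_auc_score(y_true, y_score)                # area under the ROC curve

y_pred = (y_score >= 0.5).astype(int)                 # threshold chosen for illustration
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUROC={auroc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```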

