Plenum a la Mode - Augmented Reality Fashions

2021 ◽  
Author(s):  
Geoffreyjen Edwards ◽  
Jonathan Caron-Roberge ◽  
Dominique Michaud ◽  
Jonathan Proulx Guimond

Inspired by ideas portrayed in science fiction, the authors sought to develop a set of augmented reality fashions that showcased scenes from a science fiction novel recently published by the principal author. The development team included artists and designers, a programmer, and the writer. Significant technical challenges needed to be overcome for success, including fabric construction and manipulation, image enhancement, robust image recognition and tracking capabilities, and the management of lighting and suitable backgrounds. Viewing geometries were also a non-trivial problem. The final solution permitted acceptable but not perfect real-time tracking of the fashion models and the visualization of both static and dynamic 3D elements overlaid onto the physical garments.

2021 ◽  
pp. 1-10
Author(s):  
Lipeng Si ◽  
Baolong Liu ◽  
Yanfang Fu

The strategic importance of military UAVs and the wide application of civil UAVs across many fields both mark the arrival of the era of unmanned aerial vehicles. At present, in the field of image research, the recognition and real-time tracking of specific objects in images remains a technology that many scholars continue to study in depth and that needs to be tackled further. Image recognition and real-time tracking technology has been widely used in UAV aerial photography. Through analysis of the convolutional neural network algorithm and comparison of image recognition techniques, the convolutional neural network algorithm is improved to enhance image recognition performance. In this paper, a target detection technique based on an improved Faster R-CNN is proposed. The algorithm model is implemented and classification accuracy is improved through Faster R-CNN network optimization. To address the problems of false detection of small targets and of scale differences in aerial datasets, this paper designs the network structure of the RPN and an optimization scheme for the related algorithms. The structure of Faster R-CNN is adjusted by improving the embedding of the CNN and the OHEM algorithm, so that the accuracy of small-target and multi-target detection is improved overall. The experimental results show that, compared with LeNet-5, the recognition accuracy of the proposed algorithm is significantly improved, and as the number of samples increases, the accuracy of the algorithm reaches 98.9%.
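The OHEM step mentioned in the abstract (keeping only the hardest proposals for the backward pass) can be sketched in a few lines. This is an illustrative NumPy version, not the paper's implementation; the function name and the sample loss values are assumptions for demonstration:

```python
import numpy as np

def ohem_select(losses: np.ndarray, keep: int) -> np.ndarray:
    """Online Hard Example Mining: return the indices of the `keep`
    highest-loss proposals, which alone contribute to the gradient."""
    keep = min(keep, losses.size)
    # Sort losses in descending order and keep the hardest examples.
    return np.argsort(losses)[::-1][:keep]

# Per-proposal losses for five hypothetical region proposals.
losses = np.array([0.1, 2.3, 0.05, 1.7, 0.4])
hard = ohem_select(losses, keep=2)
# hard -> array([1, 3]): the two proposals with the largest losses
```

In a real training loop, only the proposals indexed by `hard` would be passed to the loss backward step, which is what lets OHEM focus training on difficult small targets.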


2002 ◽  
Vol 3 (4) ◽  
pp. 277-287 ◽  
Author(s):  
Xubo B. Song ◽  
Yaser Abu-Mostafa ◽  
Joseph Sill ◽  
Harvey Kasdan ◽  
Misha Pavel

2022 ◽  
Vol 18 (1) ◽  
pp. 1-31
Author(s):  
Guohao Lan ◽  
Zida Liu ◽  
Yunfan Zhang ◽  
Tim Scargill ◽  
Jovan Stojkovic ◽  
...  

Mobile Augmented Reality (AR), which overlays digital content on the real-world scene surrounding a user, is bringing immersive interactive experiences in which the real and virtual worlds are tightly coupled. To enable seamless and precise AR experiences, an image recognition system is required that can accurately recognize objects in the camera view with low system latency. However, due to the pervasiveness and severity of image distortions, an effective and robust image recognition solution for “in the wild” mobile AR remains elusive. In this article, we present CollabAR, an edge-assisted system that provides distortion-tolerant image recognition for mobile AR with imperceptible system latency. CollabAR incorporates both distortion-tolerant and collaborative image recognition modules in its design. The former enables distortion-adaptive image recognition to improve robustness against image distortions, while the latter exploits the spatial-temporal correlation among mobile AR users to improve recognition accuracy. Moreover, as it is difficult to collect a large-scale image distortion dataset, we propose a Cycle-Consistent Generative Adversarial Network-based data augmentation method to synthesize realistic image distortions. Our evaluation demonstrates that CollabAR achieves over 85% recognition accuracy for “in the wild” images with severe distortions, while reducing the end-to-end system latency to as low as 18.2 ms.
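CollabAR synthesizes distortions with a Cycle-Consistent GAN; as a much simpler analytic stand-in, the following sketch applies two of the distortion types such a system targets (additive Gaussian noise and motion blur). The function names and parameters are illustrative assumptions, not part of CollabAR:

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0,
                       seed: int = 0) -> np.ndarray:
    """Additive Gaussian noise, one common camera-image distortion."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def motion_blur_1d(img: np.ndarray, length: int = 5) -> np.ndarray:
    """Horizontal motion blur via a simple box kernel (an analytic
    stand-in for the GAN-synthesized blur described in the paper)."""
    kernel = np.ones(length) / length
    # Convolve each row independently with the box kernel.
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1,
        img.astype(np.float64)).astype(np.uint8)

# Usage on a small synthetic grayscale image.
img = np.full((8, 8), 128, dtype=np.uint8)
noisy = add_gaussian_noise(img)
blurred = motion_blur_1d(img)
```

Augmenting training data with distortions like these is what lets a recognizer stay accurate on "in the wild" camera frames rather than only on clean images.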


IDEA JOURNAL ◽  
2020 ◽  
Vol 17 (02) ◽  
pp. 275-288
Author(s):  
J Rosenbaum

This art project examines non-binary and transgender identity by training machines to generate art based on Greek and Roman statuary. The statuary is binary in nature and appeals to the concept of pinnacles of masculinity and femininity, but what of those of us who fall between: transgender bodies, gender non-conforming and non-binary bodies, and intersex bodies? Image recognition algorithms have a difficult time classifying people who fall outside the binary, those who don’t pass as cisgender, and those who present in neutral or subversive ways. As image recognition becomes more prevalent, we need a past and a future for everyone who doesn’t fit neatly into one of the only two boxes on offer. We need to open up the categories, allow people to self-identify, or scrap the concept of gendering people mechanically altogether. As a spatial installation, Hidden Worlds also explores the embodiment of interactive augmented reality bodies in the space between the physical and digital worlds. I have worked with a classifier and some deliberately abstract, machine-generated figure works to explore where gender is assigned in the process, what it looks like when you aren’t neatly classified, and the disconnect that is felt when misgendered. The generated captions have flipped around gender, and as the figure resolves and each section is submitted to the narrative writer, you see a different set of pronouns: a disconnection between what you see and what you hear. I will explore the assumptions we make about classical art, the way it can inform how we represent gender minorities going forward, and how art can illustrate the gaps that exist in the training of these important machine learning systems.


Author(s):  
Juin-Ling Tseng

In general, most current augmented reality (AR) systems can combine 3D virtual scenes with live reality, and users usually interact with the 3D objects of an AR system through image recognition. Although image-recognition technology has matured enough to allow users to interact with the system, the interaction process is usually limited by the number of patterns used to identify the image, which makes it inconvenient to handle. To provide a more flexible mode of interactive manipulation, this study introduces a speech-recognition mechanism that allows users to operate 3D objects in an AR system simply by speaking. In terms of implementation, the program uses Unity3D as the main development environment and the AR e-Desk as the main development platform. The AR e-Desk interacts through the identification mechanism of reacTIVision and its markers. We use Unity3D to build the required 3D virtual scenes and objects in the AR e-Desk and import the Google Cloud Speech suite into the AR e-Desk system to develop the speech-interaction mechanism. In this way, an intelligent AR system is developed.
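A minimal sketch of the speech-interaction idea: map a recognized transcript to an operation on a 3D object's state. The command vocabulary, handler names, and object representation below are hypothetical stand-ins; the actual system wires Google Cloud Speech results into Unity3D rather than Python:

```python
from typing import Callable, Dict

def make_dispatcher() -> Dict[str, Callable[[dict], None]]:
    """Build a table mapping spoken command words to object operations.
    The vocabulary here is an assumption for illustration."""
    def rotate(obj: dict) -> None:
        obj["rotation"] = (obj.get("rotation", 0) + 90) % 360
    def scale_up(obj: dict) -> None:
        obj["scale"] = obj.get("scale", 1.0) * 1.5
    def reset(obj: dict) -> None:
        obj.update(rotation=0, scale=1.0)
    return {"rotate": rotate, "bigger": scale_up, "reset": reset}

def handle_transcript(transcript: str, obj: dict,
                      commands: Dict[str, Callable[[dict], None]]) -> bool:
    """Apply the first known command word found in the transcript;
    return whether any command was recognized."""
    for word in transcript.lower().split():
        if word in commands:
            commands[word](obj)
            return True
    return False

# A speech recognizer would supply the transcript string here.
obj: dict = {}
handled = handle_transcript("please rotate the cube", obj, make_dispatcher())
# handled -> True; obj["rotation"] -> 90
```

Decoupling the dispatch table from the recognizer is what makes the interaction mode flexible: adding a new voice command only means adding one entry, with no new image-recognition patterns.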

