Frameless
Latest Publications


TOTAL DOCUMENTS

15
(FIVE YEARS 15)

H-INDEX

0
(FIVE YEARS 0)

Published By Wallace Center

2693-8790

Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 1-1
Author(s):  
Ralf O. Schneider ◽  
Keyword(s):  

The recent development of Mixed Reality (MR) devices and apps hints at an exciting future. To remain true to the inherent innovative spirit of the design profession, it is important to expose students to MR in the classroom in an exploratory setting.


Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 29-36
Author(s):  
Luane Davis Haggerty ◽  

Del-Sign is a physical approach to acting that combines elements of François Delsarte's mime techniques with the foundations of American Sign Language. This acting and presentational technique uses cross-cultural physical communication to deepen an actor's performance, support a presenter's lecture, or serve as a format from which to create animations that communicate with or without verbal language. It is a historical fact that Deaf actors using the foundations of Sign Language influenced the movie industry (Higgins). In the infancy of silent film, Deaf performers were brought in as consultants to ensure that the gestures, relational positions, facial expressions, camera angles, and body language of the actors had the strongest impact and the clearest meaning (Albert Ballin). At that time the standard acting technique was a codified movement study begun and refined by François Delsarte (1870s to 1890s, Paris; 1880 to 1915, Steele MacKaye, New York). By blending these two structures we gain an outline for creating movement, posture, and gesture (MPG) that easily communicates meaning. The applications of this performance technique are many and varied, from the obvious stage acting to lawyers, teachers, priests, and other presenters. Del-Sign can now bridge into technology, allowing this approach to be used when creating characters and movement for VR, AR, or MR.


Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 45-46
Author(s):  
Renée Stevens ◽  

My creative research looks at how Augmented Reality (AR) can help overcome learning disabilities. This focus has directed me to develop a number of ideas and concepts that seek to understand how the addition of information in an augmented view could help tackle some of the challenges different learning disabilities present while, at the same time, appealing to a larger audience. This concept was the foundation for creating the immersive mobile application tagAR™, which adds a digital name tag into your augmented view, replacing the traditional "Hello, my name is" sticker. It allows you to see the names of people around you hovering above their heads at all times. Currently, the app relies on the user holding up a mobile device so that the camera can add the name tags to the view on the screen. As mainstream technology continues to advance, this app will work on wearable devices, eliminating the need to hold up your phone, which is a bit socially awkward. The target audience is those who have trouble remembering people's names, those who benefit from seeing the names of people around them, and those who want to network and meet new people at social and educational events such as conferences or workshops. Through the design and development of this app, I have been exploring how the social aspect can be extended beyond the name display and user search features to include customizable, searchable tags, so users can connect, find people with similar interests, share contact information, and connect with others on multiple social media platforms. I will discuss the research, design, and development processes that influenced the concept and functionality of this social AR app.


Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
Author(s):  
Joe Geigel ◽  
◽  
David Long ◽  

Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 26-28
Author(s):  
Tanat Boozayaangool ◽  

VRsus guARdian is an amalgamation of two gameplay mediums that bridges the gap separating two mixed-reality platforms: virtual reality (VR) and augmented reality (AR). The game uses each medium's approach to immersion as a design principle, building natural, asymmetric play for both players. It also constructs a compelling fantasy atop each medium's unique interactive capabilities to build a dynamic narrative. Lastly, the cross-platform nature of VRsus guARdian made the game highly dependent on Unity, a game engine with highly accessible, platform-agnostic development capabilities.


Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 39-44
Author(s):  
Nora Pfund ◽  
◽  
Nitin Sampat ◽  
J. A. Stephen Viggiano ◽  
◽  
...  

High-quality 360 capture for Cinematic VR is a relatively new and rapidly evolving technology. The field demands very high-quality, distortion-free 360 capture, which is not possible with cameras that depend on fish-eye lenses for capturing a 360 field of view. The Facebook Surround 360 Camera, one of the few "players" in this space, is an open-source-licensed design that Facebook has released for anyone who chooses to build it from off-the-shelf components and generate 8K stereo output using open-source-licensed rendering software. However, the components are expensive and the system itself is extremely demanding in terms of computer hardware and software. Because of this, there have been very few implementations of this design and virtually no real deployment in the field. We have implemented the system, based on Facebook's design, and have been testing and deploying it in various situations, even generating short video clips. We have discovered in our recent experience that high-quality 360 capture comes with its own set of new challenges. As an example, even the most fundamental tools of photography like "exposure" become difficult, because one is always faced with ultra-high dynamic range scenes (one camera may be pointing directly at the sun while others point into dark shadow). The conventional imaging pipeline is further complicated by the fact that the stitching software has different effects on various aspects of the calibration or pipeline optimization. Most of our focus to date has been on optimizing the imaging pipeline and improving the quality of the output for viewing in an Oculus Rift headset. We designed a controlled experiment to study five key parameters in the rendering pipeline: black level, neutral balance, color correction matrix (CCM), geometric calibration, and vignetting.
By varying all of these parameters in a combinatorial manner, we were able to assess their relative impact on the perceived image quality of the output. Our results thus far indicate that output image quality is greatly influenced by the black level of the individual cameras (the Facebook camera comprises 17 cameras whose outputs need to be stitched to obtain a 360 view). Output quality is least sensitive to neutral balance. The results we obtain from accurately calculating and applying the CCM for each individual camera remain puzzling: we obtained improved results by instead using the average of the matrices for all cameras. Future work includes evaluating the effects of geometric calibration and vignetting on quality.
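As an illustrative sketch only (not the authors' Surround 360 code, and with made-up parameter values), the per-camera correction stages named in the abstract - black-level subtraction, neutral balance, and a 3x3 CCM - along with the CCM-averaging workaround the authors describe, might look like:

```python
import numpy as np

def correct_camera(raw, black_level, wb_gains, ccm):
    """Hypothetical per-camera pipeline stage: subtract the black level,
    apply per-channel neutral (white) balance gains, then apply a 3x3
    color correction matrix to each RGB pixel."""
    img = raw.astype(np.float64) - black_level   # black-level subtraction
    img = np.clip(img, 0.0, None)                # no negative light
    img = img * wb_gains                         # neutral balance (per channel)
    img = img @ ccm.T                            # 3x3 CCM applied per pixel
    return np.clip(img, 0.0, 1.0)

def averaged_ccm(ccms):
    """Average the per-camera CCMs, as the authors found gave better
    results than applying each camera's individually computed matrix."""
    return np.mean(np.stack(ccms), axis=0)
```

A controlled experiment like the one described would then vary `black_level`, `wb_gains`, and `ccm` combinatorially across the 17 cameras and compare perceived quality of the stitched output.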


Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 1-6
Author(s):  
Eli Kuslansky ◽  

Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 37-38
Author(s):  
Michael J. Murdoch ◽  
◽  
Nargess Hassani ◽  
Sara Leary ◽  
◽  
...  

This presentation will summarize recent work on the visual perception of color appearance and object properties in optical see-through (OST) augmented reality (AR) systems. OST systems, such as Microsoft HoloLens, use a see-through display to superimpose virtual content onto a user's view of the real world. With careful tracking of both display and world coordinates, synthetic objects can be added to the real world, and real objects can be manipulated via synthetic overlays. Ongoing research studies how the combination of real and virtual stimuli is perceived and how users' visual adaptation is affected; two specific examples will be explained.
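A simplified model (an assumption for illustration, not taken from the paper) of why color appearance is hard in OST displays: the optical combiner adds the display's emitted light to the light transmitted from the real scene, so virtual content can only add light, never block it. A minimal sketch, with a hypothetical transmittance value:

```python
import numpy as np

def ost_blend(real_rgb, virtual_rgb, transmittance=0.8):
    """Additive OST model: the combiner passes the real scene
    (attenuated by its transmittance) and adds the display's light.
    Values are linear RGB in [0, 1]; transmittance is illustrative."""
    real = np.asarray(real_rgb, dtype=np.float64)
    virt = np.asarray(virtual_rgb, dtype=np.float64)
    return np.clip(transmittance * real + virt, 0.0, 1.0)
```

Under this model a "red" overlay on a gray background is perceived as desaturated, since the background light still reaches the eye - one reason perception studies like those summarized here are needed.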


Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
Author(s):  
Jennifer Poggi ◽  

Frameless ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 1-8
Author(s):  
Elizabeth Goins ◽  
