The Effect of NURBS-to-Polygon Conversion in 3D Car Design on 3D Model Quality Assessment

2021 ◽  
Vol 6 (2) ◽  
pp. 119
Author(s):  
Awaludin Abid ◽  
Kusrini Kusrini ◽  
Amir Fatah Sofyan

In the automotive industry, the cost of prototyping increases in proportion to the complexity and dependencies of a vehicle. As an alternative to physical prototyping, new technologies such as Augmented Reality (AR) and Virtual Reality (VR) can be used. AR and VR involve real-time rendering of CAD data, which consumes a lot of memory and reduces application performance. Data preparation therefore plays an important role in improving performance while maintaining topology and mesh quality. The CAD data optimization process used here is tessellation, i.e. converting NURBS to polygons, which produces output data that performs efficiently while preserving good topology and mesh quality. Dedicated 3D data preparation and optimization software exists in the tessellator class. Autodesk Maya, a 3D modeling package that supports Non-Uniform Rational Basis Splines (NURBS) as well as CAD data, includes a feature for converting NURBS models to polygons, and the requirements selected for tessellation strongly affect the output. The assessment was performed objectively using 3D mesh visual quality metrics based on the vertex-position Hausdorff distance, from which effective tessellation requirements were obtained. The converted models have a topology similar to that produced by dedicated data preparation and optimization software, while the mesh visual quality metrics show that the Count and General tessellation methods come closest. Keywords— Tessellation, Mesh Visual Quality, CAD, Polygon
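The vertex-position Hausdorff distance mentioned in this abstract can be illustrated with a minimal sketch: for each vertex of one mesh, take the distance to its nearest vertex in the other mesh, and report the worst case in both directions. This is a brute-force illustration of the general metric, not the specific tooling used in the paper; the toy vertex sets are invented for the example.

```python
import math

def hausdorff(a, b):
    """Symmetric vertex-position Hausdorff distance between two 3D vertex sets.

    For each vertex in one set, find the nearest vertex in the other set;
    the metric is the largest such nearest-neighbour distance, taken in
    both directions. Brute-force O(n*m) -- fine for a sketch.
    """
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

# Two toy "meshes": identical except one vertex displaced by 0.5 along z.
orig = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tess = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.5)]
print(hausdorff(orig, tess))  # 0.5
```

A smaller Hausdorff distance between the original NURBS-derived vertices and the tessellated output indicates a conversion that better preserves the shape.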

2010 ◽  
Vol 40-41 ◽  
pp. 388-391 ◽  
Author(s):  
Shou Xiang Zhang

An unmanned mining technology for fully mechanized longwall face automation is proposed and studied. The essential technology brings longwall face production into visualization through the union of Virtual Reality (VR) and Augmented Reality (AR). Based on a visual theoretical model of the longwall face, the combination of virtual and real elements, real-time interaction, and 3D registration were realized. Keying technology and the alpha channel are used to combine the real longwall face with the virtual user.
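The alpha-channel combination this abstract refers to boils down to standard alpha compositing: each output pixel is a weighted blend of the virtual layer and the real camera image. A minimal per-pixel sketch (the RGB values are invented for the example; a real pipeline would do this per frame on the GPU):

```python
def composite(virtual_px, real_px, alpha):
    """Alpha-blend one virtual RGB pixel over one real RGB pixel.

    alpha = 1.0 shows only the virtual layer; alpha = 0.0 shows only
    the real scene. out = alpha * virtual + (1 - alpha) * real.
    """
    return tuple(round(alpha * v + (1 - alpha) * r)
                 for v, r in zip(virtual_px, real_px))

# Pure red virtual pixel blended 50/50 over a pure blue real pixel.
print(composite((255, 0, 0), (0, 0, 255), 0.5))  # (128, 0, 128)
```

In the paper's setting, the alpha channel would carry a per-pixel mask so the virtual user appears only where the keying step marks the pixel as foreground.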


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker ◽  
Sven Bilen ◽  
Janis Terpenny ◽  
Chimay Anumba

Immersive virtual reality systems have the potential to transform the manner in which designers create prototypes and collaborate in teams. Using technologies such as the Oculus Rift or the HTC Vive, a designer can attain a sense of “presence” and “immersion” typically not experienced by traditional CAD-based platforms. However, one of the fundamental challenges of creating a high-quality immersive virtual reality experience is actually creating the immersive virtual reality environment itself. Typically, designers spend a considerable amount of time manually designing virtual models that replicate physical, real-world artifacts. While there exists the ability to import standard 3D models into these immersive virtual reality environments, these models are typically generic in nature and do not represent the designer’s intent. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available RGB-D sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems such as the Microsoft Kinect has enabled the rapid 3D reconstruction of physical environments. The authors present a methodology that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed methodology.
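A core step in RGB-D reconstruction of the kind this abstract describes is back-projecting each depth pixel into a 3D camera-space point with the pinhole model. This is a generic sketch, not the authors' pipeline; the Kinect-like intrinsics below are hypothetical placeholder values, not taken from the paper.

```python
def depth_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a depth pixel (u, v) with depth Z (meters) into a
    3D camera-space point using the pinhole camera model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical Kinect-like intrinsics (illustrative only).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

# A pixel at the principal point projects straight down the optical axis.
print(depth_to_point(319.5, 239.5, 2.0, FX, FY, CX, CY))  # (0.0, 0.0, 2.0)
```

Running this over every pixel of a depth frame yields the point cloud that mesh reconstruction algorithms then triangulate before the mesh is streamed over TCP to the VR clients.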


2021 ◽  
Author(s):  
Ezgi Pelin Yildiz

Augmented reality is defined as the technology in which virtual objects are blended with the real world and interact with each other. Although augmented reality applications are used in many areas, the most important of these is the field of education. AR technology allows the combination of real objects and virtual information in order to increase students’ interaction with physical environments and facilitate their learning. Developing technology enables students to learn complex topics in a fun and easy way through virtual reality devices. Students interact with objects in the virtual environment and can learn more about them. For example, by organizing digital tours of a museum or zoo in a completely different country, lessons can be taught in the company of a teacher as if the students were there at that moment. In light of all this, this study is a compilation study. In this context, augmented reality technologies are introduced and attention is drawn to their use in different fields of education, with examples. As a suggestion at the end of the study, it is emphasized that the prepared sections should be carefully read by educators and put into practice in their lessons. In addition, it is pointed out that the technology should be preferred in order to communicate effectively with students through real-time interaction, especially during the pandemic.


2015 ◽  
Vol 2015 ◽  
pp. 1-5 ◽  
Author(s):  
Kuei-Shu Hsu ◽  
Chia-Sui Wang ◽  
Jinn-Feng Jiang ◽  
Hung-Yuan Wei

Augmented reality technology is applied so that driving tests may be performed in various environments using a virtual reality scenario, with the ultimate goal of improving the visual and interactive effects experienced by simulated drivers. Environmental conditions simulating a real scenario are created using an augmented reality structure, which guarantees the test taker’s safety since they are not subject to real-life elements and dangers. Furthermore, the accuracy of tests conducted through virtual reality is not influenced by either environmental or human factors. Driver posture is captured in real time using Kinect’s depth-perception function and then applied to driving simulation effects that are rendered with Unity3D’s gaming technology. Subsequently, different driving models may be collected from different drivers. In this research, realistic street environments are simulated to evaluate driver behavior. A variety of different visual effects are readily available to effectively reduce error rates, thereby significantly improving test safety as well as the reliability and realism of this project. Different situation designs are simulated and evaluated to increase development efficiency and build more safety-verification test platforms using such technology in conjunction with driving tests, vehicle fittings, environmental factors, and so forth.
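One way posture data of the kind Kinect provides can drive a simulator input is to derive a steering angle from the tracked hand positions. The abstract does not specify how posture maps to controls, so the mapping below is a hypothetical sketch: hands level means zero steering, and the angle of the line between the hands gives the wheel rotation.

```python
import math

def steering_angle(left_hand, right_hand):
    """Estimate a steering angle in degrees from tracked (x, y) positions
    of the left and right hands, as a Kinect-style skeleton might supply.

    0 degrees = hands level (wheel centered); positive = clockwise turn.
    This mapping is illustrative, not the paper's actual method.
    """
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]
    return math.degrees(math.atan2(dy, dx))

print(steering_angle((-0.2, 0.0), (0.2, 0.0)))  # 0.0
```

In a Unity3D-based simulator, a value like this would be fed each frame into the vehicle model's steering input.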


Author(s):  
Sathiya Narayanan ◽  
Nikshith Narayan Ramesh ◽  
Amit Kumar Tyagi ◽  
L. Jani Anbarasi ◽  
Benson Edwin Raj

In recent years, innovations such as Augmented Reality (AR), Virtual Reality (VR), and the Internet of Things have enhanced user experience dramatically. In general, AR is completely different from VR and provides real-time solutions to users by projecting layers of information onto real-world environments. Advancements in computer-generated sensory input have made the concept of believable virtual environments a reality. With the availability of such technologies, one can investigate “how these technologies can be applied beyond gaming or other useful applications” and “how further improvements can be made to allow for full digital immersion.” This chapter provides a detailed description of AR and VR, followed by interesting real-world examples of AR applications. In addition, this chapter discusses the issues and challenges faced with AR/VR, with a motivation of exploring the options for improvement.


Leonardo ◽  
2014 ◽  
Vol 47 (4) ◽  
pp. 325-336 ◽  
Author(s):  
Patrick Lichty

From ARToolkit’s emergence in the 1990s to the emergence of augmented reality (AR) as an art medium in the 2010s, AR has developed across a number of evidential sites. As an extension of virtual media, it merges real-time pattern recognition with goggles (finally realizing William Gibson’s sci-fi fantasy) or handheld devices. This creates a welding of real-time media and virtual reality, or an optically registered simulation overlaid upon an actual spatial environment. Commercial applications are numerous, including entertainment, sales, and navigation. Even though AR-based works can be traced back to the late 1990s, early AR work required some understanding of coding and tethered imaging equipment. It was not until marker-based AR, affording lower barriers to use, as well as geo-locational AR-based media using handheld devices and tablets, that augmented reality as an art medium would propagate. While one can argue that AR-based art is a convergence of handheld device art and virtual reality, there are intrinsic gestures specific to augmented reality that make it unique. The author looks at some historical examples of AR as well as critical issues of AR-based gestures such as compounding the gaze, problematizing the retinal, and the representational issues of informatic overlays. This generates four gestural vectors, analogous to those defined in “The Translation of Art in Virtual Worlds,” which are examined through case studies. From this, a visual theory of augmentation is proposed.


2020 ◽  
Author(s):  
Pearl Jishtu ◽  
Madhura A Yadav

AR and VR are simulation tools created to assist global development by saving time. Time as a resource is difficult to harness; however, work becomes highly efficient and productive when it is tackled with automation. All concerned are excited about AR and VR’s involvement in our lifestyle, but not all have comprehended their impact. AR and VR were introduced in architecture and planning as assisting tools, and they have helped generate multiple design options, expanded the possibilities of visualization, and provided more enhanced, detailed, and specific experiences in real time, enabling us to visualize the result of the work at hand well before the commencement of the project. These tools are further being developed for city development decisions, helping citizens interact with local authorities, access public services, and plan their commutes. After reviewing multiple research papers on these technologies, it was observed that all are moving forward with the changes brought by them, without entirely understanding their role. This paper provides an overview of the application of AR & VR in architecture and planning.


Fingertip detection plays a specific role in most vision-based applications. The latest technologies, such as virtual reality and augmented reality, actually build on this fingertip detection concept as a foundation. It is also helpful for Human-Computer Interaction (HCI). Fingertip detection and tracking can therefore be applied from games to robot control, and from augmented reality to smart homes. The most interesting field for fingertip detection is gesture recognition-related applications. In the context of interaction with machines, gestures are the simplest and most efficient means of communication. This paper analyses the various works done in the area of fingertip detection. A review of various real-time fingertip methods is presented, covering different techniques and tools. Some challenges and research directions are also highlighted. Many researchers use fingertip detection in HCI systems, which have many applications such as user identification and smart homes. A comparison of results by different researchers is also included.
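Among the real-time fingertip methods such a review covers, one of the simplest heuristics is to segment the hand into a binary mask and take the topmost foreground pixel as the fingertip candidate for an upright, pointing hand. A minimal sketch of that heuristic (the tiny mask is invented for the example; real systems work on full camera frames):

```python
def topmost_fingertip(mask):
    """Return a fingertip candidate as the topmost foreground pixel of a
    binary hand mask -- a classic, simple heuristic for an upright hand.

    mask: list of rows, top row first; 1 = hand pixel, 0 = background.
    Returns (row, col) of the candidate, or None if the mask is empty.
    """
    for r, row in enumerate(mask):
        for c, val in enumerate(row):
            if val:
                return (r, c)
    return None

hand = [
    [0, 0, 1, 0],  # extended fingertip
    [0, 1, 1, 0],
    [1, 1, 1, 1],  # palm
]
print(topmost_fingertip(hand))  # (0, 2)
```

More robust methods in the literature replace this heuristic with contour curvature analysis or convex-hull defects, which also handle multiple fingers and rotated hands.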


Author(s):  
Anjali Daisy

Augmented reality (AR) refers to the layering of visual information onto a live picture of your physical surroundings, enhancing the real-world environment in real time. Snapchat and Instagram filters are both current examples of augmented reality. Since this technology has shown its ability to capture users, more and more brands are using it to engage current and potential customers. In an environment where almost everyone has a smartphone, augmented reality seems like an obvious next step, since no additional hardware is needed. It is generally quite straightforward for people to use, and it has a great capacity to enhance the effects of marketing.


Author(s):  
Yuzhu Lu ◽  
Shana Smith

In this paper, we present a prototype system, which uses CAVE-based virtual reality to enhance immersion in an augmented reality environment. The system integrates virtual objects into a real scene captured by a set of stereo remote cameras. We also present a graphic processing unit (GPU)-based method for computing occlusion between real and virtual objects in real time. The method uses information from the captured stereo images to determine depth of objects in the real scene. Results and performance comparisons show that the GPU-based method is much faster than prior CPU-based methods.
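The occlusion computation this abstract describes reduces to a per-pixel depth test: a virtual object's pixel is visible only where its depth is smaller than the real scene's depth recovered from the stereo cameras. The sketch below shows that test in plain Python on tiny hand-made depth maps; the paper's contribution is running the same comparison on the GPU for full frames in real time.

```python
def occlusion_mask(real_depth, virtual_depth):
    """Per-pixel visibility of a virtual object against a real scene.

    real_depth / virtual_depth: 2D lists of depths in meters (same shape).
    A virtual pixel is visible (True) where the virtual object is closer
    to the camera than the real surface at that pixel.
    """
    return [
        [v < r for r, v in zip(real_row, virt_row)]
        for real_row, virt_row in zip(real_depth, virtual_depth)
    ]

real = [[2.0, 1.0],
        [2.0, 2.0]]
virt = [[1.5, 1.5],
        [3.0, 1.0]]
print(occlusion_mask(real, virt))  # [[True, False], [False, True]]
```

On the GPU this comparison maps naturally onto the depth buffer, which is why the GPU-based method outperforms the earlier CPU implementations the paper compares against.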

