Using smart glasses for ultrasound diagnostics

2015 ◽  
Vol 1 (1) ◽  
pp. 196-197 ◽  
Author(s):  
Stefan Maas ◽  
Marvin Ingler ◽  
Heinrich Martin Overhoff

Abstract. Ultrasound has been established as a diagnostic tool in a wide range of applications. Especially for beginners, aligning sectional images with the patient's spatial anatomy can be cumbersome. A direct view onto the patient's anatomy while viewing ultrasound images may help to avoid an unergonomic examination. To address these issues, an affordable augmented reality system using smart glasses was created that displays a (virtual) ultrasound image beneath the (real) ultrasound transducer.

2021 ◽  
Author(s):  
Alex Ufkes

Augmented Reality (AR) combines a live camera view of a real world environment with computer-generated virtual content. Alignment of these viewpoints is done by recognizing artificial fiducial markers, or, more recently, natural features already present in the environment. This is known as Marker-based and Markerless AR respectively. We present a markerless AR system that is not limited to artificial markers, but is capable of rendering augmentations over user-selected textured surfaces, or ‘maps’. The system stores and differentiates between multiple maps, all created online. Once recognized, maps are tracked using a hybrid algorithm based on feature matching and inlier tracking. With the increasing ubiquity and capability of mobile devices, we believe it is possible to perform robust, markerless AR on current generation tablets and smartphones. The proposed system is shown to operate in real-time on mobile devices, and generate robust augmentations under a wide range of map compositions and viewing conditions.
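The hybrid tracking step described above (feature matching followed by inlier tracking) can be illustrated with a minimal homography fit over matched keypoints. The DLT estimator, the point sets, and the pixel tolerance below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit a 3x3 homography H with dst ~ H @ src
    from at least 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null-space vector of A (last row of Vt) holds the 9 entries of H.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def count_inliers(H, src, dst, tol=2.0):
    """Project src through H and count matches within tol pixels of dst."""
    pts = np.hstack([np.asarray(src, float), np.ones((len(src), 1))]) @ H.T
    proj = pts[:, :2] / pts[:, 2:3]
    err = np.linalg.norm(proj - np.asarray(dst, float), axis=1)
    return int((err < tol).sum())
```

In a full pipeline, a robust estimator (e.g. RANSAC) would fit the homography from noisy matches and the inlier count would decide whether the tracked map is still recognized.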


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for indoor and outdoor localization and navigation of visually impaired people. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ adds the possibility for users to have enhanced interactions with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to navigate easily in indoor and outdoor scenarios simply by loading a previously recorded virtual path, and provides automatic guidance along the route through haptic, speech, and sound feedback.
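The automatic guidance along a recorded virtual path can be sketched as a toy heading comparison: pick the nearest recorded waypoint and compare the bearing to it with the user's current heading. The 2-D coordinates, the nearest-waypoint choice, and the 15° tolerance are assumptions for illustration, not ARIANNA+'s actual ARKit-based logic:

```python
import math

def turn_hint(position, heading_deg, waypoints, tol_deg=15.0):
    """Return 'straight', 'left', or 'right' depending on how the bearing
    to the nearest recorded waypoint compares with the current heading.
    Angles follow math convention: 0 deg = +x axis, counter-clockwise positive."""
    px, py = position
    # Nearest waypoint of the recorded virtual path (squared distance).
    wx, wy = min(waypoints, key=lambda w: (w[0] - px) ** 2 + (w[1] - py) ** 2)
    bearing = math.degrees(math.atan2(wy - py, wx - px))
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if abs(diff) <= tol_deg:
        return "straight"
    return "left" if diff > 0 else "right"
```

In the real system, this kind of cue would be rendered as haptic, speech, or sound feedback rather than returned as a string.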


2021 ◽  
Vol 43 (2) ◽  
pp. 74-87
Author(s):  
Weimin Zheng ◽  
Shangkun Liu ◽  
Qing-Wei Chai ◽  
Jeng-Shyang Pan ◽  
Shu-Chuan Chu

In this study, an automatic pennation angle measuring approach based on deep learning is proposed. First, the Local Radon Transform (LRT) is used to detect the superficial and deep aponeuroses in the ultrasound image. Second, a reference line is introduced between the deep and superficial aponeuroses to assist in detecting the orientation of the muscle fibers. A Deep Residual Network (ResNet) is used to judge the relative orientation of the reference line and the muscle fibers. The reference line is then revised until it is parallel to the orientation of the muscle fibers. Finally, the pennation angle is obtained from the directions of the detected aponeuroses and the muscle fibers. The angle detected by the proposed method differs by about 1° from the manually labeled angle. With a CPU, the average inference time for a single image is around 1.6 s, compared to 0.47 s per image when processing a sequential image sequence. Experimental results show that the proposed method achieves accurate and robust measurements of the pennation angle.
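The final two steps, revising the reference line until it parallels the fibers and reading off the pennation angle, can be sketched as a bisection driven by a classifier verdict. The angle bounds, iteration count, and the stand-in `fiber_side` oracle (playing the ResNet's role) are illustrative assumptions, not the paper's implementation:

```python
def refine_reference(theta_lo, theta_hi, fiber_side, iters=30):
    """Bisect on the reference-line angle (degrees). fiber_side(theta)
    stands in for the ResNet verdict: +1 if the muscle fibers lie
    counter-clockwise of the reference line, -1 if clockwise."""
    for _ in range(iters):
        mid = 0.5 * (theta_lo + theta_hi)
        if fiber_side(mid) > 0:
            theta_lo = mid  # fibers are steeper: raise the reference line
        else:
            theta_hi = mid  # fibers are shallower: lower the reference line
    return 0.5 * (theta_lo + theta_hi)

def pennation_angle(aponeurosis_deg, fiber_deg):
    """Acute angle between the detected aponeurosis and the fiber direction."""
    diff = abs(fiber_deg - aponeurosis_deg) % 180.0
    return min(diff, 180.0 - diff)
```

With exact verdicts, 30 bisection steps narrow a 90° search interval to well under the reported ~1° accuracy; in practice the per-step classification dominates the runtime.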


2013 ◽  
Vol 60 (9) ◽  
pp. 2636-2644 ◽  
Author(s):  
Hussam Al-Deen Ashab ◽  
Victoria A. Lessoway ◽  
Siavash Khallaghi ◽  
Alexis Cheng ◽  
Robert Rohling ◽  
...  

2016 ◽  
Vol 7 (1) ◽  
pp. 89-102 ◽  
Author(s):  
U. T. Okpara ◽  
L. C. Stringer ◽  
A. J. Dougill

Abstract. The science of climate security and conflict is replete with controversies. Yet the increasing vulnerability of politically fragile countries to the security consequences of climate change is widely acknowledged. Although climate conflict reflects a continuum of conditional forces that coalesce around the notion of vulnerability, how different portrayals of vulnerability influence the discursive formation of climate conflict relations remains an important but under-researched issue. This paper combines a systematic discourse analysis with a vulnerability interpretation diagnostic tool to explore (i) how discourses of climate conflict are constructed and represented, (ii) how vulnerability is communicated across discourse lines, and (iii) the strength of contextual vulnerability against a deterministic narrative of scarcity-induced conflict, such as that pertaining to land. Systematically characterising climate conflict discourses based on the central issues constructed, assumptions about mechanistic relationships, implicit normative judgements, and vulnerability portrayals provides a useful way of understanding where discourses differ. While discourses show a wide range of opinions "for" and "against" climate conflict relations, engagement with vulnerability has been less pronounced – except for the dominant context-centrism discourse concerned with human security (particularly in Africa). In exploring this discourse, we observe an increasing sense of contextual vulnerability that is oriented towards a concern for complexity rather than predictability. The article concludes by illustrating that a turn towards contextual vulnerability thinking will help advance a constructivist theory-informed climate conflict scholarship that recognises historicity, specificity, and variability as crucial elements of the contextual totalities of any area affected by climate conflict.


2009 ◽  
Vol 5 (4) ◽  
pp. 415-422 ◽  
Author(s):  
Ramesh Thoranaghatte ◽  
Jaime Garcia ◽  
Marco Caversaccio ◽  
Daniel Widmer ◽  
Miguel A. Gonzalez Ballester ◽  
...  
