Optical datalink between display monitor and web camera

Author(s):  
Pimrapat Thanusutiyabhorn, Waleed S. Mohammed, Poompat Saengudomlert
2018, Vol 7 (2.7), pp. 183
Author(s):  
M Vamsi Krishna, A Bhargav Reddy, V Sandeep

Securing daily life is an active research area, and the Internet of Things (IoT) has been a turning point: everyday objects can now be connected and automated. This facial recognition door unlock system detects a face and identifies the person among known people. Since every human face is unique, face recognition is a reliable basis for access control, and this concept motivated the work. Our main aim is to create a smart door system for a house that secures the home and the personal belongings inside it. The system uses a live web camera mounted at the front of the door, along with a display monitor. The camera captures the person standing in front of the door, and the display shows the owner/viewer who is there; the processor also generates voice output to present answers and instructions on the screen. A stepper motor locks and opens the door by a sliding mechanism, so that a person standing in front of the door can access it without touching it. Face detection and identification are performed through the Microsoft Face API, and the display application is operated through Microsoft Visual Studio.
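The control flow described above (capture a frame, identify the person, drive the sliding door) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the camera capture, the Face API verification call, and the stepper-motor driver are all reduced to hypothetical Python stubs.

```python
def capture_frame(camera):
    """Grab one frame from the door-side web camera (stub)."""
    return camera.get("frame")

def identify_person(frame, known_faces):
    """Return the matched person's name, or None.
    Stands in for a Microsoft Face API identify/verify call."""
    return frame if frame in known_faces else None

def slide_door(direction):
    """Drive the stepper motor to slide the door 'open' or 'closed' (stub)."""
    return f"door {direction}"

def door_controller(camera, known_faces):
    """One pass of the access-control loop: capture, identify, act."""
    frame = capture_frame(camera)
    person = identify_person(frame, known_faces)
    if person is not None:
        return slide_door("open"), f"Welcome, {person}"
    return slide_door("closed"), "Access denied"
```

In a real deployment the identification step would be an authenticated HTTP call to the Face API and the motor step a GPIO/driver command; the point here is only the decision structure.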


Author(s):  
Sukhendra Singh, G. N. Rathna, Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but most hearing people do not know it, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of the scene, whereas the Kinect captures a 3D (depth) image, which makes classification more accurate. Result: The Kinect produces different images for the gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between these pairs. We used hand gestures from Indian sign language; our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compared the performance of various machine learning models and found that a CNN on depth images gave the most accurate performance. All results were obtained on a PYNQ-Z2 board.
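The 80/20 train/test split described above can be sketched as below. The helper name and the fixed shuffling seed are illustrative choices, not details from the paper; only the dataset size (46,339 images) and the 80/20 ratio come from the abstract.

```python
import random

def train_test_split(items, train_frac=0.8, seed=0):
    """Shuffle a list of samples and split it into train/test partitions."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = items[:]                # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# With the paper's 46,339 depth images, an 80/20 split yields
# 37,071 training samples and 9,268 test samples.
train, test = train_test_split(list(range(46339)))
```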


2020, Vol 3 (1), pp. 33-41
Author(s):  
Hwunjae Lee, Junhaeng Lee
This study evaluated the PSNR between the server display monitor and the client display monitor of a DSA system. The signal is acquired and imaged during surgery and stored in the PACS server; distortion of the original signal during subsequent observation on the client monitor is then an important problem. Several factors can degrade medical images: noise generated during compression, information loss during image storage and transmission in PACS, and deterioration in image quality when a monitor outputs the image. The equipment used for the experiment was P's DSA. Two monitors were used: P's monitor with 1280×1024-pixel resolution and W's monitor with 1536×2048-pixel resolution. The PACS program was MARO-view, and a PSNR measurement program implemented in Visual C++ was used for the experiment. The measured PSNR values were 26.958 dB for the kidney angiography image, 28.9174 dB for the lung angiography image, 22.8315 dB for the heart angiography image, 37.0319 dB for the neck angiography image, and 43.2052 dB for the knee blood vessel image. In conclusion, there is almost no signal distortion in the process of acquiring, storing, and transmitting images in PACS. However, the image signal may be distorted depending on the resolution and performance of each monitor; therefore, monitor performance should be evaluated and maintained.
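The PSNR metric reported above has a standard definition, PSNR = 10·log10(MAX² / MSE). The paper's own measurement tool was written in Visual C++; the sketch below is a minimal pure-Python version, assuming 8-bit images flattened to sequences of pixel values.

```python
import math

def psnr(original, distorted, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences: 10 * log10(max_val**2 / MSE).
    Returns infinity when the images are identical (MSE = 0)."""
    assert len(original) == len(distorted)
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_val ** 2 / mse)

# A distortion of 1 gray level at every pixel (MSE = 1) gives
# PSNR = 20 * log10(255), about 48.13 dB for 8-bit images.
```

Values in the 20-45 dB range, like those reported for the angiography images, correspond to visible but moderate distortion; identical server and client images would give an infinite PSNR.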


Author(s):  
Fitri Utaminingrum, Gusti Pangestu, Dahnial Syauqy, Yuita Arum Sari, Timothy K. Shih

Author(s):  
Dirk Desmet, Prabhat Avasare, Paul Coene, Stijn Decneut, Filip Hendrickx, ...
