Low-cost, high-definition video documentation of corrective cleft surgeries using a fixed laparoscope

2014 ◽  
Vol 67 (2) ◽  
pp. e58-e59 ◽  
Author(s):  
Patrick DeMoss ◽  
Kariuki P. Murage ◽  
Sunil Tholpady ◽  
Michael Friel ◽  
Robert J. Havlik ◽  
...  
2017 ◽  
Vol 24 (4) ◽  
pp. 369-372 ◽  
Author(s):  
Rui Sergio Monteiro de Barros ◽  
Marcus Vinicius Henriques Brito ◽  
Renan Kleber Costa Teixeira ◽  
Vitor Nagai Yamaki ◽  
Felipe Lobato da Silva Costa ◽  
...  

Background: Although microsurgery has traditionally relied on the surgical microscope, several alternative magnification systems have shown promising results, and improvements in image quality have made it possible to use video systems in microsurgery safely and accurately. The aim of this study was to evaluate the use of a low-cost, video-assisted magnification system for peripheral neurorrhaphy in rats. Methods: Twenty Wistar rats were randomly divided into 2 matched groups according to the magnification system used: the microscope group, in which neurorrhaphy was performed under a microscope at 40× magnification, and the video system group, in which the procedures were performed under a video system composed of a high-definition Sony DCR-SR42 camcorder set to 52× magnification, macro lenses, a 42-inch television, and a digital HDMI cable. We analyzed weight, nerve caliber, total surgery time, neurorrhaphy time, number of stitches, and number of axons at both the proximal and distal ends. Results: There were no significant differences between groups in weight, nerve caliber, or number of stitches. Neurorrhaphy took longer under the video system (video: 5.60 minutes; microscope: 3.20 minutes; P < .05). The number of axons was similar between groups in both the proximal and distal stumps. Conclusion: Peripheral neurorrhaphy in rats can be performed under video system magnification, albeit with a longer surgical time.
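The reported comparison of neurorrhaphy times (video 5.60 min vs. microscope 3.20 min, P < .05) amounts to a simple two-group test. The Python sketch below shows how such a comparison could be run; the per-animal times and the choice of an independent-samples t-test are illustrative assumptions, not the study's raw data or its stated analysis.

```python
# Minimal sketch of the two-group comparison reported in the abstract
# (neurorrhaphy time, video system vs. microscope). The values below are
# hypothetical placeholders, NOT the study's raw measurements.
import numpy as np
from scipy import stats

# Hypothetical per-animal neurorrhaphy times in minutes (n = 10 per group),
# chosen only so the group means roughly match the reported 5.60 vs 3.20.
video_group = np.array([5.1, 5.8, 6.2, 5.4, 5.9, 5.3, 6.0, 5.5, 5.7, 5.1])
microscope_group = np.array([3.0, 3.4, 3.1, 3.3, 2.9, 3.5, 3.2, 3.0, 3.4, 3.2])

# Independent two-sample t-test (the abstract reports only P < .05;
# the authors' actual statistical test is not specified here).
t_stat, p_value = stats.ttest_ind(video_group, microscope_group)
print(f"video mean = {video_group.mean():.2f} min, "
      f"microscope mean = {microscope_group.mean():.2f} min, p = {p_value:.4f}")
```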


2016 ◽  
Vol 33 (03) ◽  
pp. 158-162 ◽  
Author(s):  
Rui Sergio Monteiro de Barros ◽  
Marcus Brito ◽  
Rafael Leal ◽  
Marcelo Sabbá ◽  
...  

2021 ◽  
Vol 27 (1) ◽  
Author(s):  
J. M. Lazarus ◽  
M. Ncube

Background: The technology currently used for surgical endoscopy was developed, and is manufactured, in high-income economies, and its cost makes technology transfer to resource-constrained environments difficult. We aimed to design an affordable wireless endoscope to aid visualisation during rigid endoscopy and minimally invasive surgery (MIS). The initial prototype aimed to replicate a 4-mm lens used in rigid cystoscopy. Methods: The focus was on using open-source resources to develop the wireless endoscope, significantly lowering the cost and making the device accessible for resource-constrained settings. An off-the-shelf miniature single-board computer module was used because of its low cost (US$10) and its ability to handle high-definition (720p) video. Open-source Linux software made monitor-mode ("hotspot") wireless video transmission possible. A 1280 × 720 pixel high-definition tube camera generated the video signal, which is transmitted to a standard laptop computer for display. Bench testing included the latency of wireless digital video transmission, and the prototype was compared with industry-standard wired cameras on weight and cost. Battery life was also assessed. Results: Compared with an industry-standard cystoscope lens, wired camera, video-processing unit, and light source, the prototype costs substantially less (US$230 vs US$28,000). The prototype is lightweight (184 g), has no tethering cables, and has acceptable battery life (over 2 h using a 1200 mAh battery). The camera transmits video wirelessly in near real time with imperceptible latency (<200 ms). Image quality is high definition at 30 frames per second, colour rendering is good, and white balancing is possible. Limitations include the lack of a zoom. Conclusion: The novel wireless endoscope camera described here offers high-definition video equivalent to that of contemporary industry wired units at a markedly reduced cost and could help make minimally invasive surgery possible in resource-constrained environments.
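The abstract describes the laptop side of the system only in outline: it receives a 720p, 30 fps wireless stream, and the authors report end-to-end latency below 200 ms. As a rough illustration, the Python sketch below opens a network video stream with OpenCV and logs inter-frame intervals; the stream URL, transport, and codec are assumptions, and the inter-frame interval is only a proxy for stream smoothness. Measuring true end-to-end latency would require something like filming a millisecond timer and comparing the displayed and captured times.

```python
# Receiver-side sketch: display a 720p network video stream on the laptop and
# log inter-frame intervals as a rough proxy for stream smoothness. The URL is
# a hypothetical placeholder; the prototype's actual transport, port, and codec
# are not specified in the abstract.
import time
import cv2

STREAM_URL = "udp://192.168.4.1:5000"   # hypothetical endpoint on the endoscope's hotspot

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open video stream")

prev = time.monotonic()
while True:
    ok, frame = cap.read()              # expect 1280x720 frames at ~30 fps
    if not ok:
        break
    now = time.monotonic()
    interval_ms = (now - prev) * 1000.0
    prev = now
    cv2.imshow("endoscope", frame)
    print(f"frame interval: {interval_ms:.1f} ms")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```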


Author(s):  
Ting Zhang ◽  
Lin Wang ◽  
Lingmei Kong ◽  
Chengxi Zhang ◽  
Haiyong He ◽  
...  

Metal halide perovskite light-emitting diodes (PeLEDs) have attracted extensive attention due to their high color purity, wide color gamut, and low-cost solution processability, showing great potential for application in next-generation high-definition...


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3270 ◽  
Author(s):  
Hao Cai ◽  
Zhaozheng Hu ◽  
Gang Huang ◽  
Dunyao Zhu ◽  
Xiaocong Su

Self-localization is a crucial task for intelligent vehicles. Existing localization methods usually require a high-cost IMU (inertial measurement unit) or expensive LiDAR sensors (e.g., the Velodyne HDL-64E). In this paper, we propose a low-cost yet accurate localization solution that uses a consumer-grade GPS receiver and a low-cost camera with the support of an HD map. Unlike existing HD map-based methods, which usually require unique landmarks within the sensed range, the proposed method uses common lane lines for vehicle localization, applying a Kalman filter to fuse the GPS, monocular vision, and HD map for more accurate positioning. In the Kalman filter framework, the observations consist of two parts: the raw GPS coordinate, and the lateral distance between the vehicle and the lane computed from the monocular camera. The HD map provides reference position information and relates the local lateral distance from vision to the GPS coordinates, so that a linear Kalman filter can be formulated. In the prediction step, we propose a data-driven motion model rather than a kinematic model, which is more adaptive and flexible. The proposed method has been tested with both simulation data and real data collected in the field. The results demonstrate that the localization errors of the proposed method are less than half, or even one third, of the original GPS positioning errors, using low-cost sensors with HD map support. Experimental results also demonstrate that integrating the proposed method into existing ones can greatly enhance localization results.
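Since the abstract spells out the structure of the filter (a linear Kalman filter whose observations are the raw GPS coordinate and the vision-derived lateral distance to a lane line taken from the HD map), a minimal sketch may help. The Python code below is an illustration under stated assumptions: a constant-velocity prediction stands in for the paper's data-driven motion model, and the lane-line parameters, noise covariances, and measurement values are hypothetical.

```python
# Minimal sketch of the linear Kalman-filter fusion described above: the state
# is the vehicle pose in a local map frame, and the observations are the raw GPS
# coordinate plus the signed lateral distance to a lane line from the HD map.
# All numbers and the constant-velocity prediction are illustrative assumptions.
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# State: [x, y, vx, vy]; constant-velocity prediction as a stand-in for the
# data-driven motion model proposed in the paper.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
Q = np.diag([0.05, 0.05, 0.2, 0.2])      # process noise (assumed)

# Lane line from the HD map in the local frame: a*x + b*y + c = 0 with
# a^2 + b^2 = 1, so the signed lateral distance is a*x + b*y + c (linear in state).
a, b, c = 0.0, 1.0, -3.5                 # hypothetical lane line y = 3.5 m

H = np.array([[1, 0, 0, 0],              # GPS x
              [0, 1, 0, 0],              # GPS y
              [a, b, 0, 0]])             # lateral distance (offset c subtracted from z)
R = np.diag([4.0, 4.0, 0.04])            # GPS noise ~2 m, vision lateral noise ~0.2 m

x = np.array([0.0, 3.0, 5.0, 0.0])       # initial state estimate (assumed)
P = np.eye(4)

# One predict/update step with hypothetical measurements.
gps_xy = np.array([0.6, 3.4])            # noisy GPS fix
lateral = -0.4                           # lateral distance from the monocular camera
z = np.array([gps_xy[0], gps_xy[1], lateral - c])

x, P = F @ x, F @ P @ F.T + Q            # prediction step
x, P = kf_update(x, P, z, H, R)          # fuse GPS + vision/HD-map lateral distance
print("fused position estimate:", x[:2])
```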

