Manufacturing of a Human’s Hand Prosthesis with Electronic Movable Phalanges Based on a CT Image: An Amputation Case

Author(s):  
Juan Alfonso Beltrán-Fernández ◽  
Itzel Alejandrina Aguirre Hernández ◽  
Itzel Bantle-Chávez ◽  
Carolina Alvarado-Moreno ◽  
Luis Héctor Hernández-Gómez ◽  
...  
Keyword(s):  
Ct Image ◽  


2015 ◽  
Vol 54 (06) ◽  
pp. 247-254 ◽  
Author(s):  
A. Kapfhammer ◽  
T. Winkens ◽  
T. Lesser ◽  
A. Reissig ◽  
M. Steinert ◽  
...  

Summary Aim: To retrospectively evaluate the feasibility and value of CT-CT image fusion for assessing the shift of peripheral lung cancers with/without chest wall infiltration, comparing computed tomography acquisitions in shallow breathing (SB-CT) and deep-inspiration breath-hold (DIBH-CT) in patients undergoing FDG-PET/CT for lung cancer staging. Methods: Image fusion of SB-CT and DIBH-CT was performed with a multimodal workstation used for nuclear medicine fusion imaging. The distances of intrathoracic landmarks and the positional shifts of tumours were measured using a semitransparent overlay of both CT series. Statistical analyses were adjusted for confounders of tumour infiltration, and cutoff levels were calculated for the prediction of infiltration and non-infiltration. Results: The lateral pleural recessus and the diaphragm showed the largest respiratory excursions. Infiltrating lung cancers showed more limited respiratory shifts than non-infiltrating tumours, and a large respiratory tumour motility accurately predicted non-infiltration. However, the tumour shifts were limited and variable, which limited the accuracy of prediction. Conclusion: This pilot fusion study proved feasible and allowed a simple analysis of the respiratory shifts of peripheral lung tumours using CT-CT image fusion in a PET/CT setting. The calculated cutoffs were useful in excluding chest wall infiltration but did not accurately predict tumour infiltration. This method can provide additional qualitative information, without the need for additional investigations, in patients with lung cancers that contact the chest wall but show unclear CT evidence of infiltration and who undergo PET/CT. Considering the small sample size investigated, further studies are necessary to verify the obtained results.
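The core measurement in this study, a semitransparent overlay of two co-registered CT series and the distance a landmark moves between them, can be sketched with plain NumPy. This is a minimal illustration with synthetic values, not the workstation's actual implementation; the function names and pixel spacing are assumptions:

```python
import numpy as np

def overlay(sb_ct, dibh_ct, alpha=0.5):
    """Alpha-blend two co-registered CT slices for visual comparison."""
    return alpha * sb_ct + (1.0 - alpha) * dibh_ct

def landmark_shift(p_sb, p_dibh, spacing_mm=1.0):
    """Euclidean distance (mm) between the same landmark in both series."""
    diff = (np.asarray(p_sb, dtype=float) - np.asarray(p_dibh, dtype=float)) * spacing_mm
    return float(np.linalg.norm(diff))

# Synthetic example: a tumour centre moving 5 pixels at 0.8 mm/pixel -> 4.0 mm shift.
shift = landmark_shift((120, 80), (125, 80), spacing_mm=0.8)
blended = overlay(np.zeros((4, 4)), np.ones((4, 4)))
```

Comparing such shifts against a cutoff is then a one-line threshold test, which mirrors how the study's cutoffs would be applied in practice.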


2019 ◽  
Vol 14 (7) ◽  
pp. 658-666
Author(s):  
Kai-jian Xia ◽  
Jian-qiang Wang ◽  
Jian Cai

Background: Lung cancer is one of the most common malignant tumours. Successful diagnosis of lung cancer depends on the accuracy of the images obtained from medical imaging modalities. Objective: The fusion of CT and PET combines the complementary and redundant information of both images and can increase the ease of perception. Since the existing fusion methods are not perfect and the fusion effect remains to be improved, this paper proposes a novel adaptive PET/CT fusion method for lung cancer within the Piella framework. Methods: The algorithm first adopts the dual-tree complex wavelet transform (DTCWT) to decompose the PET and CT images into different components. In accordance with the characteristics of the low-frequency and high-frequency components and the features of the PET and CT images, five membership functions are combined to determine the fusion weights for the low-frequency components. To fuse the high-frequency components, the energy difference of the decomposition coefficients is selected as the match measure and the local energy as the activity measure; in addition, a decision factor is determined for the high-frequency components. Results: The proposed method is compared with several pixel-level spatial-domain image fusion algorithms. The experimental results show that the proposed algorithm is feasible and effective. Conclusion: The proposed algorithm better retains and accentuates the edge and texture information of lesions in the fused image.


2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has explosively spread worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to quickly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training and a testing set at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an additional external testing dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers.
RESULTS Of the four pre-trained models of FCONet, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
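The sensitivity, specificity, and accuracy figures reported above follow the standard confusion-matrix definitions. A minimal sketch (the counts below are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts.

    tp/fn: diseased cases correctly/incorrectly classified;
    tn/fp: healthy cases correctly/incorrectly classified.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Illustrative counts for a binary COVID-19 vs. non-COVID-19 decision:
sens, spec, acc = diagnostic_metrics(tp=95, fn=5, tn=100, fp=0)
```

Note that with zero false positives, specificity is exactly 1.0 regardless of the other counts, which is the pattern behind the paper's reported 100% specificity.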


Author(s):  
Juan Sebastian Cuellar ◽  
Dick Plettenburg ◽  
Amir A Zadpoor ◽  
Paul Breedveld ◽  
Gerwin Smit

Various upper-limb prostheses have been designed for 3D printing, but only a few of them are based on bio-inspired design principles, and many anatomical details are typically not incorporated even though 3D printing offers advantages that facilitate the application of such design principles. We therefore aimed to apply a bio-inspired approach to the design and fabrication of articulated fingers for a new type of 3D printed hand prosthesis that is body-powered (BP) and complies with basic user requirements. We first studied the biological structure of human fingers and their movement control mechanisms in order to devise the transmission and actuation system. A number of working principles were established and various simplifications were made to fabricate the hand prosthesis using a fused deposition modelling (FDM) 3D printer with dual material extrusion. We then evaluated the mechanical performance of the prosthetic device by measuring its ability to exert pinch forces and the energy dissipated during each operational cycle. We fabricated our prototypes using three polymeric materials: PLA, TPU, and Nylon. The total weight of the prosthesis was 92 g, with a total material cost of 12 US dollars. The energy dissipated during each cycle was 0.380 Nm, with a pinch force of ≈16 N corresponding to an input force of 100 N. The hand is actuated by a conventional pulling cable used in BP prostheses. It is connected to a shoulder strap at one end and to the coupling of the whiffle tree mechanism at the other end. The whiffle tree mechanism distributes the force to the four tendons, which bend all fingers simultaneously when pulled. The design described in this manuscript demonstrates several bio-inspired design features and is capable of performing different grasping patterns due to the adaptive grasping provided by the articulated fingers.
The pinch force obtained is superior to that of other fully 3D printed body-powered hand prostheses, but still below that of conventional body-powered hand prostheses. We present a 3D printed bio-inspired prosthetic hand that is body-powered and includes all of the following characteristics: adaptive grasping, articulated fingers, and minimized post-printing assembly. Additionally, the low cost and low weight make this prosthetic hand a worthy option, mainly in locations where state-of-the-art prosthetic workshops are absent.
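The whiffle tree's role can be sketched as an idealised, friction-free balance: each stage splits its incoming force equally between two branches, so a two-level tree feeds four tendons a quarter of the cable force each. This is a simplified model for illustration; it ignores the ≈0.380 Nm dissipated per cycle, so real tendon forces would be lower:

```python
def whiffletree_forces(input_force, levels=2):
    """Ideal output forces of a symmetric whiffle tree.

    Each level splits the incoming force equally over two branches,
    so `levels=2` yields four equal tendon forces. Lossless model:
    the outputs always sum back to the input force.
    """
    n_outputs = 2 ** levels
    return [input_force / n_outputs] * n_outputs

# 100 N on the shoulder-strap cable -> 25 N per tendon (ideal, lossless).
forces = whiffletree_forces(100.0)
```

A key property of the mechanism, and the reason it enables adaptive grasping, is that when one finger is blocked by an object the floating links simply redistribute the remaining cable travel to the other tendons; the equal-split model above is only the unobstructed case.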


2016 ◽  
Vol 35 (4) ◽  
pp. 299-303 ◽  
Author(s):  
S. Nayak ◽  
P.K. Lenka ◽  
A. Equebal ◽  
A. Biswas
Keyword(s):  

2021 ◽  
Author(s):  
Guanghui Fu ◽  
Jianqiang Li ◽  
Ruiqian Wang ◽  
Yue Ma ◽  
Yueda Chen
Keyword(s):  
Ct Image ◽  
Brain Ct ◽  
