Forward kinematic solution and its applications for a 3-DOF parallel kinematic machine (PKM) with a passive link

Robotica ◽  
2006 ◽  
Vol 24 (5) ◽  
pp. 549-555 ◽  
Author(s):  
Z. M. Bi ◽  
S. Y. T. Lang

In this paper, a 3-DOF parallel kinematic machine (PKM) with a passive link is introduced. The forward kinematic model is established, and a new technique is proposed to solve this model. The developed forward kinematic solver (FKS) is employed in two new applications: the determination of the joint workspace and sensor-based real-time monitoring. A joint workspace concept is proposed for the optimization of a PKM structure. It is defined as the reachable area in the joint coordinate system under given ranges of active joint motions; the larger the joint workspace, the higher the utilization of the joint motion capacity. Sensor-based monitoring is applied during real-time system operation, by which a remote user can monitor a PKM over the Internet based on feedback from the joint encoders or the on-site stereo camera.
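As a rough illustration of the joint workspace idea described above, the sketch below grid-samples the active joint ranges and keeps only the combinations for which a forward kinematic solution exists. The paper's PKM geometry and solver are not reproduced, so solve_fk(), the joint limits, and the sampling density are placeholders.

```python
# Hypothetical sketch: estimate a joint workspace by sampling the active joint
# ranges and keeping the combinations for which the forward kinematic model
# yields a valid pose. solve_fk() is a placeholder, not the paper's FKS.
import numpy as np

def solve_fk(q):
    """Placeholder forward kinematic solver.

    Returns an end-effector pose for active joint vector q,
    or None if the constraint equations have no solution.
    """
    # A real implementation would solve the PKM constraint equations
    # (e.g. by Newton-Raphson); here we only accept a dummy region.
    return q if np.linalg.norm(q) < 1.0 else None

def joint_workspace(q_min, q_max, samples_per_axis=20):
    """Grid-sample the active joint ranges and collect reachable joint points."""
    axes = [np.linspace(lo, hi, samples_per_axis) for lo, hi in zip(q_min, q_max)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, len(q_min))
    reachable = np.array([q for q in grid if solve_fk(q) is not None])
    # Utilization of joint motion capacity: fraction of sampled joint space that is reachable.
    utilization = len(reachable) / len(grid)
    return reachable, utilization

if __name__ == "__main__":
    reachable, utilization = joint_workspace([-0.8, -0.8, -0.8], [0.8, 0.8, 0.8])
    print(f"reachable samples: {len(reachable)}, utilization: {utilization:.2%}")
```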

Author(s):  
Wenqiang Chen ◽  
Lin Chen ◽  
Meiyi Ma ◽  
Farshid Salemi Parizi ◽  
Shwetak Patel ◽  
...  

Wearable devices, such as smartwatches and head-mounted devices (HMDs), demand new input methods that offer a natural, subtle, and easy-to-use way to enter commands and text. In this paper, we propose and investigate ViFin, a new technique for command input and text entry that harnesses finger-movement-induced vibration to track continuous, fine-grained finger writing with a commodity smartwatch. Inspired by the recurrent neural aligner and transfer learning, ViFin recognizes continuous finger writing, works across different users, and achieves accuracies of 90% and 91% for recognizing numbers and letters, respectively. We quantify our approach's accuracy through real-time system experiments across different arm positions, writing speeds, and smartwatch position displacements. Finally, a real-time writing system and two user studies on real-world tasks are implemented and assessed.
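To make the pipeline concrete, here is a minimal sketch (not the authors' ViFin model) of a small recurrent classifier over windows of smartwatch accelerometer data, standing in for the recurrent-neural-aligner approach mentioned above. The window length, channel count, and layer sizes are assumptions for illustration only.

```python
# Minimal sketch (not the authors' ViFin model): a small GRU classifier over
# windows of smartwatch accelerometer data. Window length, sample count, and
# layer sizes are assumed for illustration.
import torch
import torch.nn as nn

class FingerWritingClassifier(nn.Module):
    def __init__(self, n_channels=3, hidden=64, n_classes=26):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, channels)
        _, h = self.gru(x)           # h: (1, batch, hidden)
        return self.head(h[-1])      # logits over letter classes

if __name__ == "__main__":
    model = FingerWritingClassifier()
    windows = torch.randn(8, 200, 3)     # 8 windows of 200 accelerometer samples
    logits = model(windows)
    print(logits.shape)                  # torch.Size([8, 26])
```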


Author(s):  
J. Presedo ◽  
J. Vila ◽  
S. Barro ◽  
R. Ruiz ◽  
F. Palacios

2019 ◽  
Vol 20 (2) ◽  
pp. 142-162
Author(s):  
Hanene Elleuch ◽  
Ali Wali

In this paper, a novel real-time system for controlling mobile devices with eye and hand gestures in situations such as driving, cooking, and practicing sports is proposed. The originality of the proposed system is that it uses real-time video streaming captured by the front-facing camera of the device. To this end, three principal modules are in charge of recognizing eye gestures, recognizing hand gestures, and fusing these motions. Four contributions are presented in this paper. First, a fuzzy inference system is proposed for determining eye gestures. Second, a new database has been collected and used for the classification of open and closed hand gestures. Third, two descriptors have been combined to obtain boosted classifiers that detect hand gestures with an AdaBoost detector. Fourth, the eye and hand gestures are merged to command the mobile device using a decision-tree classifier. Different experiments were conducted to show that the proposed system is efficient and competitive with existing systems, achieving recalls of 76.53%, 98%, and 99% for eye gesture recognition, fist gesture detection, and palm gesture detection, respectively, and a success rate of 88% for the correlation of eye and hand gestures.
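The fusion step described in the fourth contribution can be pictured with a short sketch: a decision-tree classifier that maps the outputs of the eye-gesture and hand-gesture modules to a device command. The gesture encodings and command set below are invented for illustration; the paper's fuzzy eye-gesture module and AdaBoost hand detector are not shown.

```python
# Hedged sketch of the fusion stage only: a decision tree mapping
# (eye gesture, hand gesture) pairs to a device command. The encodings and
# command labels are hypothetical placeholders.
from sklearn.tree import DecisionTreeClassifier

# Features: [eye_gesture_id, hand_gesture_id]
# e.g. eye: 0 = neutral, 1 = blink, 2 = look-left; hand: 0 = none, 1 = fist, 2 = palm
X = [
    [1, 1], [1, 2], [2, 1], [2, 2], [0, 1], [0, 2], [1, 0], [2, 0],
]
# Commands: 0 = select, 1 = back, 2 = scroll-left, 3 = scroll-right, 4 = no action
y = [0, 1, 2, 3, 4, 4, 4, 4]

fusion = DecisionTreeClassifier(random_state=0)
fusion.fit(X, y)

print(fusion.predict([[1, 1], [2, 2]]))  # -> [0 3]
```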


Author(s):  
Lisa-Marie Vortmann ◽  
Felix Putze

Adding attention-awareness to an Augmented Reality setting by using a Brain-Computer Interface promises many interesting new applications and improved usability. However, the possibly complicated setup and relatively long training period of EEG-based BCIs greatly reduce this positive effect. In this study, we aim at finding solutions for person-independent, training-free BCI integration into AR to classify internally and externally directed attention. We assessed several different classifier settings on a dataset of 14 participants consisting of simultaneously recorded EEG and eye tracking data. For this, we compared the classification accuracies of a linear algorithm, a non-linear algorithm, and a neural net trained on a specifically generated feature set, as well as a shallow neural net for raw EEG data. With a real-time system in mind, we also tested different window lengths of the data, aiming at the best trade-off between short window length and high classification accuracy. Our results showed that the shallow neural net based on 4-second raw EEG data windows was best suited for real-time person-independent classification. The accuracy for the binary classification of internal and external attention periods reached up to 88% with a model trained on a set of selected participants. On average, the person-independent classification rate reached 60%. Overall, large individual differences were visible in the results. In the future, further datasets are necessary to compare these results before optimizing a real-time person-independent attention classifier for AR.
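For concreteness, a shallow network over raw 4-second EEG windows could look like the sketch below. The channel count (32), sampling rate (250 Hz, so 1000 samples per window), and layer sizes are assumptions, not the architecture evaluated in the study.

```python
# Minimal sketch of a shallow net over raw 4-second EEG windows (assumed:
# 32 channels at 250 Hz -> 1000 samples). This stands in for, but is not,
# the shallow network evaluated in the study.
import torch
import torch.nn as nn

class ShallowAttentionNet(nn.Module):
    def __init__(self, n_channels=32, n_samples=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25),   # temporal filtering
            nn.BatchNorm1d(16),
            nn.ELU(),
            nn.AvgPool1d(kernel_size=50, stride=25),
        )
        with torch.no_grad():
            flat = self.features(torch.zeros(1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(flat, 2)             # internal vs. external attention

    def forward(self, x):                                # x: (batch, channels, samples)
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    net = ShallowAttentionNet()
    eeg_windows = torch.randn(4, 32, 1000)               # four 4-second windows
    print(net(eeg_windows).shape)                        # torch.Size([4, 2])
```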


2015 ◽  
Vol 2 (1) ◽  
pp. 35-41
Author(s):  
Rivan Risdaryanto ◽  
Houtman P. Siregar ◽  
Dedy Loebis

Real-time systems are now used in many fields, such as telecommunications, military, information systems, and even medicine, to obtain information quickly, on time, and accurately. Needless to say, a real-time system must always consider execution time. In our application, we define a time target (deadline) so that the system should execute all tasks within the predefined deadline. If the system fails to finish its tasks in time, this leads to fatal failure; in other words, a task that cannot be executed on time affects the subsequent tasks. In this paper, we propose a real-time system for sending data and evaluate its effectiveness and efficiency. The data-sending process is constructed in MATLAB and has a time target specifying when the data must be sent.
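The deadline check at the core of this setup can be sketched as follows. The paper's implementation is in MATLAB, so this Python fragment only illustrates the timing logic; send_data() and the 50 ms deadline are placeholders.

```python
# Hedged sketch of the deadline idea described above: run the sending task,
# measure its elapsed time, and flag a missed deadline. The task body and the
# 50 ms deadline are placeholders, not the paper's implementation.
import time

DEADLINE_S = 0.050   # assumed per-task deadline: 50 ms

def send_data(payload):
    """Placeholder for the actual data-sending task."""
    time.sleep(0.01)  # simulate transmission work
    return True

def run_with_deadline(payload):
    start = time.monotonic()
    ok = send_data(payload)
    elapsed = time.monotonic() - start
    if elapsed > DEADLINE_S:
        # A missed deadline would propagate delay to subsequent tasks.
        print(f"deadline missed: {elapsed * 1000:.1f} ms > {DEADLINE_S * 1000:.0f} ms")
    return ok and elapsed <= DEADLINE_S

if __name__ == "__main__":
    print(run_with_deadline(b"sensor frame"))
```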


Vestnik MEI ◽  
2018 ◽  
Vol 5 (5) ◽  
pp. 73-78
Author(s):  
Igor B. Fominykh ◽ 
Sergey V. Romanchuk ◽  
Nikolay P. Alekseev ◽ 
...  
