ROBOMECH Journal
Latest Publications


Total documents: 213 (five years: 85)
H-index: 11 (five years: 3)
Published by Springer (BioMed Central Ltd.)
ISSN: 2197-4225

2021, Vol 8 (1)
Author(s): Ryota Ozaki, Naoya Sugiura, Yoji Kuroda

Abstract: This paper presents an EKF (extended Kalman filter)-based self-attitude estimation method using a LiDAR DNN (deep neural network) that learns landscape regularities. The proposed DNN infers the gravity direction from LiDAR data: the point cloud obtained with the LiDAR is transformed into a depth image and fed to the network. The network is pre-trained on large synthetic datasets collected in a flight simulator, where diverse gravity vectors are easy to obtain, although the method is not limited to UAVs. After pre-training, the network is fine-tuned on datasets collected with real sensors, and data augmentation is applied during training to improve generalization. The proposed method integrates angular rates from a gyroscope with the DNN outputs in an EKF. Static validations show that the DNN can infer the gravity direction, dynamic validations show that it can be used for real-time estimation, and several conventional methods are implemented for comparison.
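The fusion step described above can be sketched as a small EKF whose prediction is driven by gyro rates and whose correction uses a gravity-direction vector standing in for the DNN output. This is a minimal illustration, not the paper's implementation: the two-state roll/pitch model, small-angle kinematics, and all noise values are assumptions.

```python
import numpy as np

def gravity_in_body(phi, theta):
    """Unit gravity direction in the body frame for roll phi, pitch theta."""
    return np.array([-np.sin(theta),
                     np.sin(phi) * np.cos(theta),
                     np.cos(phi) * np.cos(theta)])

class AttitudeEKF:
    """Toy EKF over [roll, pitch]: gyro rates drive the prediction and a
    gravity-direction measurement (here standing in for the DNN output)
    drives the correction."""
    def __init__(self):
        self.x = np.zeros(2)            # [roll, pitch] in rad
        self.P = np.eye(2) * 0.5
        self.Q = np.eye(2) * 1e-4       # process noise (assumed)
        self.R = np.eye(3) * 1e-2       # DNN output noise (assumed)

    def predict(self, gyro_xy, dt):
        self.x = self.x + gyro_xy * dt  # simplified small-angle kinematics
        self.P = self.P + self.Q

    def update(self, z):
        h = gravity_in_body(*self.x)
        # numerical Jacobian of the measurement model w.r.t. [roll, pitch]
        eps = 1e-6
        H = np.zeros((3, 2))
        for i in range(2):
            dx = np.zeros(2); dx[i] = eps
            H[:, i] = (gravity_in_body(*(self.x + dx)) - h) / eps
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - h)
        self.P = (np.eye(2) - K @ H) @ self.P
```

Feeding the filter noisy gravity directions for a fixed true attitude drives the estimate toward the true roll and pitch within a few hundred steps.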


2021, Vol 8 (1)
Author(s): Photchara Ratsamee, Yasushi Mae, Kazuto Kamiyama, Mitsuhiro Horade, Masaru Kojima, ...

Abstract: People with disabilities, such as patients with motor paralysis, lack independence and cannot move most parts of their bodies except their eyes. Supportive robot technology is highly beneficial for these patients. We propose gaze-informed, location-based (gaze-based) object segmentation, a core module of successful patient-robot interaction in an object-search task (i.e., a situation in which a robot has to search for and deliver a target object to the patient). We introduce the concepts of gaze tracing (GT) and gaze blinking (GB), which are integrated into the proposed segmentation technique to achieve accurate visual segmentation of unknown objects in a complex scene. Gaze-tracing information serves as a clue to where the target object is located in the scene, and gaze blinking then confirms the target's position. The effectiveness of the proposed method is demonstrated using a humanoid robot in experiments with different types of highly cluttered scenes. Based on limited gaze guidance from the user, we achieved an F-score of 85% for unknown-object segmentation in an unknown environment.
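The two-stage idea — gaze tracing locates the candidate, gaze blinking confirms it — can be sketched as follows. This is a simplified illustration under assumed interfaces: the bounding-box segment representation, the scoring rule, and the function names are not from the paper.

```python
def select_target(segments, gaze_points, blink_confirmed):
    """Score each candidate segment (given here as an axis-aligned box
    (x0, y0, x1, y1)) by how many gaze-trace points fall inside it; the
    best-scoring segment becomes the target only once a gaze blink
    confirms the selection."""
    def contains(box, p):
        x0, y0, x1, y1 = box
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

    scores = [sum(contains(b, p) for p in gaze_points) for b in segments]
    best = max(range(len(segments)), key=lambda i: scores[i])
    return best if blink_confirmed else None
```

Without the blink confirmation the function returns nothing, mirroring the paper's use of blinking as an explicit confirmation signal rather than relying on gaze position alone.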


2021, Vol 8 (1)
Author(s): Maozheng Xu, Taku Senoo, Takeshi Takaki

Abstract: This paper describes the landing-condition analysis of a multicopter equipped with a proposed device for rough-terrain landing. For a multicopter carrying an electrical robot arm for grasping, we propose a method to determine whether the skid-equipped multicopter can land on an arbitrary slope. We establish a static model of the entire device, analyze the conditions under which the arm and skid can contact an arbitrary plane, and compute the COG (center of gravity) of the whole system, accounting for the mass of the passive skid, the multicopter body, and each link of the robot arm. We further propose a method to analyze whether the entire device can land stably: by checking whether the projection of the device's COG falls inside or outside the triangle formed by the contact points between the device and the uneven ground, we can determine whether the device can land successfully, and the condition for a feasible landing is derived. After a numerical analysis, a verification experiment is conducted, and comparing the analytical results with the experiment demonstrates the accuracy of the analysis.
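The stability criterion above — the ground projection of the COG must lie inside the support triangle formed by the three contact points — is a standard point-in-triangle test. A minimal sketch using barycentric coordinates (function names and the 2-D projection convention are assumptions, not the paper's code):

```python
import numpy as np

def cog_inside_support_triangle(cog_xy, contacts_xy):
    """Return True if the ground projection of the COG lies inside the
    triangle formed by the three ground-contact points, using
    barycentric coordinates."""
    a, b, c = (np.asarray(p, float) for p in contacts_xy)
    p = np.asarray(cog_xy, float)
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    # all three barycentric coordinates non-negative => inside
    return u >= 0 and v >= 0 and w >= 0
```

A COG projection near the triangle's centroid passes the test; one outside any edge fails, which in the paper's terms distinguishes a stable landing from a tip-over.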


2021, Vol 8 (1)
Author(s): Kazuto Takashima, Jo Kobuchi, Norihiro Kamamichi, Kentaro Takagi, Toshiharu Mukai

Abstract: In the present study, we propose a variable-sensitivity force sensor using a shape-memory polymer (SMP), whose stiffness varies with temperature. Since the measurement range and sensitivity can be changed, it is not necessary to replace the force sensor to match the measurement target. Shape-memory polymers are often described as two-phase structures comprising a lower-temperature "glassy" hard phase and a higher-temperature "rubbery" soft phase, so the relationship between the applied force and the deformation of the SMP changes with temperature. The proposed sensor consists of strain gauges bonded to an SMP bending beam and senses the applied force by measuring the strain; the force measurement range and sensitivity can therefore be adjusted via the temperature. In our previous study, we found that a sensor with one strain gauge and a steel plate had a small error and a large sensitivity range, so in the present study we miniaturize this type of sensor. Moreover, to describe the viscoelastic behavior more accurately, we propose a transfer function based on a generalized Maxwell model. We verify the proposed model experimentally and estimate its parameters by system identification. The miniaturized sensor achieves the same performance as in our previous study, and the proposed transfer function is shown to capture the viscoelastic behavior of the SMP sensor quite well.
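A generalized Maxwell model is an equilibrium spring in parallel with several spring-dashpot (Maxwell) branches, giving the transfer function G(s) = E_inf + Σ E_i·τ_i·s / (1 + τ_i·s). A minimal evaluation sketch — the parameter values in the test are illustrative, not the identified ones from the paper:

```python
def maxwell_tf(s, e_inf, branches):
    """Force/deflection transfer function of a generalized Maxwell model:
    an equilibrium spring e_inf in parallel with Maxwell branches given
    as (e_i, tau_i) pairs.  Accepts real or complex s, so it can be used
    for frequency responses with s = 1j * omega."""
    return e_inf + sum(e * tau * s / (1 + tau * s) for e, tau in branches)
```

At s → 0 only the equilibrium spring remains (fully relaxed response), while at high frequency every branch spring contributes, so the modeled sensor looks stiffer — the frequency-dependent behavior the identified transfer function is meant to capture.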


2021, Vol 8 (1)
Author(s): Mitsuhiro Kamezaki, Yusuke Uehara, Kohga Azuma, Shigeki Sugano

Abstract: Disaster-response robots are expected to perform complicated tasks such as traveling over unstable terrain, climbing slippery steps, and removing heavy debris. To complete such tasks safely, the robots must obtain not only visual-perceptual information (VPI), such as surface shape, but also haptic-perceptual information (HPI), such as the surface friction of objects in the environment. VPI can be obtained from laser sensors and cameras; HPI, in contrast, can basically be obtained only through physical interaction with the environment, e.g., reaction force and deformation, and current robots lack a function to estimate it. In this study, we propose a framework to estimate such physically interactive parameters (PIPs), including hardness, friction, and weight, which are vital for safe robot-environment interaction. For effective estimation, we define a ground groping mode (GGM) and an object groping mode (OGM). The endpoint of the robot arm, which has a force sensor, actively touches, pushes, rubs, and lifts objects in the environment under hybrid position/force control, and the three kinds of PIPs are estimated from the measured reaction force and the displacement of the arm endpoint. The robot finally judges the accident risk based on the estimated PIPs, e.g., safe, attentional, or dangerous. We prepared environments with the same surface shape but different hardness, friction, and weight. The experimental results indicated that the proposed framework estimates PIPs adequately and is useful for judging risk and safely planning tasks.
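The three PIPs map naturally onto simple physical estimators: stiffness from the force-displacement slope while pushing, a friction coefficient from the tangential/normal force ratio while rubbing, and mass from the vertical force while lifting. A toy sketch — the data layout, thresholds, and function name are assumptions, not the paper's interface:

```python
def estimate_pips(push, rub, lift_force):
    """Toy estimators for the three physically interactive parameters
    from force/displacement data logged during the groping motions.
    push:       list of (displacement_m, normal_force_N) while pressing
    rub:        list of (tangential_force_N, normal_force_N) while rubbing
    lift_force: peak vertical force in N while the object hangs still"""
    (x0, f0), (x1, f1) = push[0], push[-1]
    stiffness = (f1 - f0) / (x1 - x0)               # N/m ("hardness")
    mu = sum(ft / fn for ft, fn in rub) / len(rub)  # mean friction coeff.
    mass = lift_force / 9.81                        # kg ("weight")
    return {"stiffness": stiffness, "friction": mu, "mass": mass}
```

In the paper's framework these estimates would then be compared against thresholds to label the situation safe, attentional, or dangerous.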


2021, Vol 8 (1)
Author(s): Kentaro Masuyama, Yoshiyuki Noda, Yasumi Ito, Yoshiyuki Kagiyama, Koichiro Ueki

Abstract: The present study proposes an advanced force display control system for a virtual-reality surgical training simulator. In oral and orthopedic surgeries, a surgeon uses a chisel and mallet to chisel and cut hard tissue. To represent the force sensation of the chiseling operation in a virtual training simulator, the force display device is built on a ball-screw mechanism to obtain high stiffness, and two-degrees-of-freedom (2DOF) admittance control is used to react instantaneously to the impact force caused by mallet strikes. In previous studies, the virtual chiseling operation was realized by a single-axis force display device. In the current study, we propose a design procedure for the force display control system with 2DOF admittance control for virtual operation in three-dimensional space. Furthermore, we propose a frequency-domain design method for the PD controller with imperfect derivative used in the 2DOF admittance control system. The efficacy of the proposed control system is verified through virtual chiseling trials with the developed force display device.
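A PD controller with imperfect derivative replaces the pure differentiator with a first-order-filtered one, C(s) = Kp + Kd·s/(Tf·s + 1), which tames the response to impact-like inputs. A minimal discrete-time sketch using backward-Euler discretization — the gains are placeholders, not the paper's identified values:

```python
class PDImperfectDerivative:
    """Discrete PD controller with a first-order filter on the derivative
    term, C(s) = Kp + Kd*s/(Tf*s + 1), discretized with backward Euler."""
    def __init__(self, kp, kd, tf, dt):
        self.kp, self.kd, self.tf, self.dt = kp, kd, tf, dt
        self.prev_e = 0.0
        self.d = 0.0   # filtered derivative state

    def step(self, e):
        # backward-Euler update of the filtered derivative
        self.d = (self.tf * self.d + self.kd * (e - self.prev_e)) / (self.tf + self.dt)
        self.prev_e = e
        return self.kp * e + self.d
```

On a step input the derivative term gives a bounded kick (instead of the impulse a pure differentiator would produce) and then decays with time constant Tf, leaving only the proportional term — the behavior that makes the controller usable against mallet impacts.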


2021, Vol 8 (1)
Author(s): Toshio Takayama, Yusuke Sumi

Abstract: Pneumatically driven soft robots have recently been widely developed. Their operating principle is usually the inflation and deflation of elastic chambers by air pressure, and some soft robots need rapid, periodic inflation and deflation of their air chambers to generate continuous motion such as locomotion or rotation. However, if a soft robot must operate far from the air pressure source, long air tubes are required to supply its chambers; the resulting large delay in supplying air pressure slows the robot's motion. In this paper, we propose a compact device that switches its airflow passages through self-excited motion generated by a supply of continuous airflow. The device is 20 mm in diameter and 50 mm long and can be driven inside a small pipe. Our proposed in-pipe mobile robot is connected to the device and moves through a small pipe by dragging the device along. To make the device widely applicable to other soft robots, we also discuss a method of adjusting its output pressure and motion frequency.


2021, Vol 8 (1)
Author(s): Makoto Sanada, Tadashi Matsuo, Nobutaka Shimada, Yoshiaki Shirai

Abstract: In this study, we propose a method by which a robot can recall multiple grasping methods for a given object. The aim is for robots to learn grasping methods for new objects by observing human grasping activities in daily life, without special instructions. In this setting, only one grasping motion is observed for an object at a time, and it is never known whether other grasping methods are possible for that object, although supervised learning generally requires all possible answers for each training input. The proposed method solves this learning situation with a convolutional neural network that automatically clusters the observed grasping methods: the grasping methods are clustered during the process of learning the grasping position. The method first recalls grasping positions, with the network estimating a multi-channel heatmap in which each channel indicates one grasping position, and then checks the graspability of each estimated position. Finally, the method recalls the hand shapes based on the estimated grasping position and the object's shape. This paper reports the results of recalling multiple grasping methods and demonstrates the effectiveness of the proposed method.
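Reading candidate grasp positions out of a multi-channel heatmap — one candidate per channel, kept only if its score clears a graspability check — can be sketched as below. The argmax-per-channel decoding and the threshold value are illustrative assumptions, not the paper's exact post-processing:

```python
import numpy as np

def grasp_positions_from_heatmaps(heatmaps, threshold=0.5):
    """Take the peak of each heatmap channel as one candidate grasp
    position, keeping it only if its score clears a graspability
    threshold.  heatmaps: iterable of 2-D arrays (H, W)."""
    positions = []
    for ch in heatmaps:
        idx = np.unravel_index(np.argmax(ch), ch.shape)
        if ch[idx] >= threshold:
            positions.append(idx)        # (row, col) of the channel peak
    return positions
```

Channels with no confident peak contribute nothing, which matches the idea that not every learned grasp cluster applies to every object.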


2021, Vol 8 (1)
Author(s): Toshiaki Nishio, Yuichiro Yoshikawa, Takamasa Iio, Mariko Chiba, Taichi Asami, ...

Abstract: The number of isolated elderly people with few opportunities to talk to others is increasing, and research is ongoing to develop talking robots to address this situation. The aim of the present study was to develop a talking robot that can converse with elderly people over an extended period. To enable long conversations, we added a previously proposed active-listening function to the robot's dialogue system to prompt the user to say something. To verify the effectiveness of this function, a comparative experiment was performed using the proposed robot system and a control system with identical functions except for active listening. The results showed that elderly subjects conversed with the proposed robot system for significantly longer than with the control system. The capability of the developed robot was further demonstrated in a nursing home for the elderly, where its conversation durations with different residents were measured: the robot could converse for more than 30 min with more than half of the elderly subjects. These results indicate that the added function of the proposed talking robot system would enable elderly people to talk over longer periods of time.


2021, Vol 8 (1)
Author(s): Masahiro Inagawa, Toshinobu Takei, Etsujiro Imanishi

Abstract: Many cooking robots have been developed in response to increasing demand. However, most existing robots must be programmed for specific recipes to enable cooking with robotic arms, which requires considerable time and expertise. This paper therefore proposes a method that allows a robot to cook by analyzing recipes available on the internet, without any recipe-specific programming: robot motion is planned from an analysis of the recipe's cooking procedure. We developed a cooking robot to execute the proposed method and evaluated the effectiveness of this approach by analyzing 50 recipes, more than 25 of which could be cooked using the proposed approach.
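One simple way to turn an analyzed recipe into a motion plan is to map each cooking step to a known motion primitive by keyword and skip steps the robot cannot handle. This keyword table is a stand-in for the paper's recipe analysis, which the abstract does not detail; all names here are illustrative:

```python
def plan_motions(steps, primitives):
    """Map each recipe step to a robot motion primitive by keyword,
    skipping steps with no matching primitive.
    steps:      list of recipe-step strings
    primitives: dict mapping a cooking verb to a motion-primitive name"""
    plan = []
    for step in steps:
        for verb, motion in primitives.items():
            if verb in step.lower():
                plan.append(motion)
                break   # one primitive per step
    return plan
```

Steps without a matching primitive are dropped, which is one plausible reading of why roughly half of the 50 analyzed recipes were fully cookable.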

