Self-healing, Mechanically Robust, and 3D Printable Ionogel for Highly Sensitive and Long-Term Reliable Ionotronics

Author(s):  
Manwen Zhang ◽  
Xinglin Tao ◽  
Ran Yu ◽  
Yangyang He ◽  
Xinpan Li ◽  
...  

Flexible sensors that can transduce various stimuli (e.g., strain, pressure, temperature) into electrical signals are in high demand owing to the development of human-machine interaction. However, it is still a...

Soft Matter ◽  
2020 ◽  
Author(s):  
Youqiang Li ◽  
Chuang Liu ◽  
Xue Lv ◽  
Shulin Sun

Hydrogel-based flexible strain sensors for personal health monitoring and human-machine interaction have attracted wide interest from researchers. In this paper, hydrophobic-association and nanocomposite conductive hydrogels were successfully prepared by...


Author(s):  
Qiang Zou ◽  
Fengrui Yang ◽  
Yaodong Wang

Abstract Wearable sensors for softness measurement are emerging as a solution for softness perception, an intrinsic function of human skin, for electronic skin and human-machine interaction. However, these wearable sensors face a key challenge: the modulus of an object cannot be characterized directly, owing to the complicated transduction mechanism. To address this challenge, we developed a flexible and wearable modulus sensor that simultaneously measures pressure and modulus without mutual interference. Modulus sensing was realized by merging the electrostatic capacitance response from the pressure sensor with the ionic capacitance response from the indentation sensor. With the optimized structure, our sensor exhibits a high modulus sensitivity of 1.9 × 10² within 0.06 MPa, a fast dynamic response time of 100 ms, and high mechanical robustness over 2500 cycles. We also integrated the sensor onto a prosthetic hand and a surgical probe to demonstrate its capability for pressure and modulus sensing. This work provides a new strategy for modulus measurement, with great potential in softness sensing and medical applications.
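The decoupled read-out described in this abstract can be sketched in a hedged way: if one assumes (hypothetically, not from the paper) that the electrostatic channel yields contact pressure and the ionic channel yields indentation strain, an effective modulus follows from a simple linear-elastic ratio. The function name and the linear-elastic assumption are illustrative only.

```python
def estimate_modulus(pressure_pa, indentation_strain):
    """Effective modulus as stress over strain (illustrative linear-elastic
    assumption; the paper's actual transduction model may differ)."""
    if indentation_strain <= 0:
        raise ValueError("indentation strain must be positive")
    return pressure_pa / indentation_strain

# e.g. 6 kPa of contact pressure producing 10% indentation strain
E = estimate_modulus(6_000, 0.10)  # 60 kPa effective modulus
```

Because pressure and strain come from two independent capacitive channels, the two quantities can in principle be read out without the mutual interference the abstract mentions.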


2019 ◽  
Vol 16 (04) ◽  
pp. 1950017
Author(s):  
Sheng Liu ◽  
Yangqing Wang ◽  
Fengji Dai ◽  
Jingxiang Yu

Motion detection and object tracking play important roles in unsupervised human–machine interaction systems. However, the interaction becomes invalid when the system fails to detect scene objects correctly owing to occlusion and a limited field of view, so robust long-term tracking of scene objects is vital. In this paper, we present a 3D motion detection and long-term tracking system with simultaneous 3D reconstruction of dynamic objects. To achieve high-precision motion detection, the proposed method provides an optimization framework with a novel motion-pose estimation energy function, by which the 3D motion pose of each object can be estimated independently. We also develop an accurate object-tracking method that combines 2D visual information and depth, and we incorporate a novel boundary-optimization segmentation based on 2D visual information and depth to improve tracking robustness significantly. In addition, we introduce a new fusion and updating strategy in the 3D reconstruction process, which brings higher robustness to 3D motion detection. Experimental results show that, for synthetic sequences, the root-mean-square error (RMSE) of our system is much smaller than that of Co-Fusion (CF), and our system performs extremely well in 3D motion detection accuracy. On real-scene data with occlusion or out-of-view events, CF suffers tracking loss or object-label changes; in contrast, our system maintains robust tracking and the correct label for each dynamic object. Our system is therefore robust to occlusion and out-of-view application scenarios.
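The RMSE metric used above to compare against Co-Fusion can be sketched as follows. This is a minimal one-dimensional illustration; the paper's actual evaluation is presumably over 3D pose trajectories, and the series below are made-up numbers.

```python
import math

def rmse(estimated, ground_truth):
    """Root-mean-square error between two equal-length trajectory series."""
    if len(estimated) != len(ground_truth):
        raise ValueError("series must have equal length")
    return math.sqrt(
        sum((e - g) ** 2 for e, g in zip(estimated, ground_truth))
        / len(estimated)
    )

print(rmse([1.0, 2.0, 4.0], [1.0, 2.0, 2.0]))  # ≈ 1.1547
```

A lower RMSE over a synthetic sequence (where ground-truth poses are known exactly) indicates more accurate pose estimation.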


Author(s):  
J. B. Manchon ◽  
Mercedes Bueno ◽  
Jordan Navarro

Objective: Automated driving is becoming a reality, and such technology raises new concerns about human–machine interaction on the road. This paper investigates factors influencing trust calibration and its evolution over time. Background: Numerous studies have shown that trust is a determinant of automation use and misuse, particularly in the automated driving context. Method: Sixty-one drivers participated in an experiment aiming to better understand the influence of the initial level of trust (Trustful vs. Distrustful) on drivers’ behaviors and trust calibration during two sessions of simulated automated driving. The automated driving style was manipulated as positive (smooth) or negative (abrupt) to investigate early human–machine interactions. Trust was assessed over time through questionnaires; drivers’ visual behaviors and take-over performance during an unplanned take-over request were also investigated. Results: Trust increased over time for both Trustful and Distrustful drivers, regardless of the automated driving style. Trust also fluctuated over time depending on the specific events handled by the automated vehicle. Take-over performance was influenced neither by the initial level of trust nor by the automated driving style. Conclusion: Trust in automated driving increases rapidly when drivers experience such a system. The initial level of trust appears crucial for further trust calibration and modulates the effect of automation performance. Long-term trust evolution suggests that experience modifies drivers’ mental models of automated driving systems. Application: In the automated driving context, trust calibration is a decisive question for guiding proper utilization of such systems and for road safety.


2020 ◽  
Vol 8 (29) ◽  
pp. 14778-14787
Author(s):  
Xurui Hu ◽  
Tao Huang ◽  
Zhiduo Liu ◽  
Gang Wang ◽  
Da Chen ◽  
...  

Graphene E-textiles exhibit excellent electrical conductivity, breathability, and washability. A graphene E-textile was applied in a wearable remote-control system by sewing pressure sensors into the five fingers of a glove to enable human–machine interaction.


Author(s):  
T. M. Seed ◽  
M. H. Sanderson ◽  
D. L. Gutzeit ◽  
T. E. Fritz ◽  
D. V. Tolle ◽  
...  

The developing mammalian fetus is thought to be highly sensitive to ionizing radiation. However, dose and dose-rate relationships are not well established, especially the long-term effects of protracted, low-dose exposure. A previous report (1) indicated that bred beagle bitches exposed to daily doses of 5 to 35 R of 60Co gamma rays throughout gestation can produce viable, seemingly normal offspring. Puppies irradiated in utero are distinguishable from controls only by their smaller size, dental abnormalities, and, in adulthood, their inability to bear young. We report here our preliminary microscopic evaluation of ovarian pathology in young pups continuously irradiated throughout gestation at daily (22 h/day) dose rates of 0.4, 1.0, 2.5, or 5.0 R/day of gamma rays from an attenuated 60Co source. Pups from non-irradiated bitches served as controls. Experimental animals were evaluated clinically and hematologically (control and 5.0 R/day pups) at regular intervals.


2021 ◽  
pp. 1-9
Author(s):  
Harshadkumar B. Prajapati ◽  
Ankit S. Vyas ◽  
Vipul K. Dabhi

Face expression recognition (FER) has attracted much attention from researchers in the field of computer vision because of its usefulness in security, robotics, and HMI (Human-Machine Interaction) systems. We propose a CNN (Convolutional Neural Network) architecture to address FER and, to show the effectiveness of the proposed model, evaluate its performance on the JAFFE dataset. We derive a concise CNN architecture for expression classification; the objective of the various experiments is to achieve convincing performance while reducing computational overhead. The proposed CNN model is very compact compared with other state-of-the-art models. Without any pre-processing, we achieved a highest accuracy of 97.10% and an average accuracy of 90.43% over the top 10 runs, which demonstrates the effectiveness of our model. Furthermore, we include visualizations of the CNN layers to observe what the network learns.

