A donor-acceptor luminogen serving as a haptic film sensor for identity recognition

2021 ◽  
pp. 110034
Author(s):  
Jiangting Hu ◽  
Miaomiao Wu ◽  
Xinyi Zhao ◽  
Yuai Duan ◽  
Jing Yuan ◽  
...  
1980 ◽  
Vol 41 (7) ◽  
pp. 707-712 ◽  
Author(s):  
A. Poure ◽  
G. Aguero ◽  
G. Masse ◽  
J.P. Aicardi

2008 ◽  
Author(s):  
Derck Schlettwein ◽  
Robin Knecht ◽  
Dominik Klaus ◽  
Christopher Keil ◽  
Günter Schnurpfeil

1989 ◽  
Vol 162 ◽  
Author(s):  
J. A. Freitas ◽  
S. G. Bishop

Abstract: The temperature and excitation-intensity dependence of photoluminescence (PL) spectra has been studied in thin films of SiC grown by chemical vapor deposition on Si (100) substrates. The low-power PL spectra from all samples exhibited a donor-acceptor pair PL band involving a previously undetected deep acceptor with a binding energy of approximately 470 meV. This deep acceptor is found in every sample studied, independent of the growth reactor, suggesting that this background acceptor is at least partially responsible for the high compensation observed in Hall-effect studies of undoped films of cubic SiC.
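The donor-acceptor pair (DAP) band described above follows the standard DAP energetics, in which the photon energy depends on the pair separation through a Coulomb term. A minimal sketch, using the 470 meV acceptor binding energy from the abstract but assuming illustrative values for the band gap, donor binding energy, and dielectric constant of cubic SiC:

```python
# DAP recombination energy versus pair separation r:
#   E(r) = Eg - (E_A + E_D) + e^2 / (4*pi*eps0*eps_r*r)
# Only E_A (0.47 eV) comes from the abstract; the other constants are
# assumed, typical-order values for 3C-SiC, not results from the paper.
import math

E_G = 2.36        # eV, low-temperature band gap of cubic SiC (assumed)
E_A = 0.47        # eV, deep-acceptor binding energy reported in the abstract
E_D = 0.054       # eV, shallow donor binding energy, e.g. nitrogen (assumed)
EPS_R = 9.7       # static dielectric constant of 3C-SiC (assumed)

E2_OVER_4PIEPS0 = 1.44e-9  # e^2/(4*pi*eps0) expressed in eV*m

def dap_energy(r_m):
    """Photon energy (eV) of a DAP transition for pair separation r_m in meters."""
    return E_G - (E_A + E_D) + E2_OVER_4PIEPS0 / (EPS_R * r_m)

# Distant pairs approach the limit Eg - (E_A + E_D); close pairs emit at
# higher energy because of the larger Coulomb term.
for r_nm in (2.0, 5.0, 50.0):
    print(f"r = {r_nm:5.1f} nm -> E = {dap_energy(r_nm * 1e-9):.3f} eV")
```

Fitting the distant-pair limit of such a band is one way the combined binding energy E_A + E_D, and hence a new acceptor level, can be extracted from PL spectra.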


2003 ◽  
Vol 773 ◽  
Author(s):  
Aaron R. Clapp ◽  
Igor L. Medintz ◽  
J. Matthew Mauro ◽  
Hedi Mattoussi

Abstract: Luminescent CdSe-ZnS core-shell quantum dot (QD) bioconjugates were used as energy donors in fluorescence resonance energy transfer (FRET) binding assays. The QDs were coated with saturating amounts of genetically engineered maltose binding protein (MBP) using a noncovalent immobilization process, and Cy3 organic dyes covalently attached to MBP at a specific sequence position were used as energy acceptor molecules. Energy transfer efficiency was measured as a function of the MBP-Cy3/QD molar ratio for two different donor fluorescence emissions (different QD core sizes). Apparent donor-acceptor distances were determined from these FRET studies, and the measured distances are consistent with QD-protein conjugate dimensions previously determined from structural studies.
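The dependence of efficiency on the MBP-Cy3/QD molar ratio is usually modeled by the multi-acceptor FRET expression, where n equivalent acceptors at a common distance r from one donor give E(n) = nR0^6 / (nR0^6 + r^6). A minimal sketch; the Förster radius and distance below are placeholder assumptions, not values from the paper:

```python
# Single-donor, n-acceptor FRET model:
#   E(n) = n*R0**6 / (n*R0**6 + r**6)
# R0 and r are illustrative placeholders, not the paper's measured values.

def fret_efficiency(n_acceptors, r, r0):
    """Transfer efficiency from one donor to n equivalent acceptors at distance r."""
    return n_acceptors * r0**6 / (n_acceptors * r0**6 + r**6)

def apparent_distance(efficiency, n_acceptors, r0):
    """Invert E(n) to recover the donor-acceptor distance from a measured efficiency."""
    return r0 * (n_acceptors * (1.0 - efficiency) / efficiency) ** (1.0 / 6.0)

R0 = 5.0   # nm, assumed Forster radius for the donor/acceptor pair
R = 7.0    # nm, assumed donor-acceptor distance
for n in (1, 5, 10):
    e = fret_efficiency(n, R, R0)
    print(f"n = {n:2d} acceptors -> E = {e:.3f}, "
          f"recovered r = {apparent_distance(e, n, R0):.2f} nm")
```

Measuring E at several molar ratios and inverting the model in this way is how apparent donor-acceptor distances of the kind reported above are typically extracted.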


2020 ◽  
Vol 64 (4) ◽  
pp. 40404-1-40404-16
Author(s):  
I.-J. Ding ◽  
C.-M. Ruan

Abstract: With rapid developments in techniques related to the Internet of Things, smart service applications such as voice-command-based speech recognition and smart care applications such as context-aware emotion recognition will gain much attention and potentially become a requirement in smart home or office environments. In such intelligent applications, identity recognition of specific members in indoor spaces is a crucial issue. In this study, a combined audio-visual identity recognition approach was developed, in which visual information obtained from face detection was incorporated into acoustic Gaussian likelihood calculations for constructing speaker classification trees, significantly enhancing the Gaussian mixture model (GMM)-based speaker recognition method. The approach considers the privacy of the monitored person and reduces the degree of surveillance. The popular Kinect sensor device, which contains a microphone array, was adopted to obtain acoustic voice data, and only two cameras are deployed in a specific indoor space to conveniently perform face detection and quickly determine the total number of people present. This head count obtained from face detection was used to regulate the design of an accurate GMM speaker classification tree. Two face-detection-regulated speaker classification tree schemes are presented for the GMM speaker recognition method: a binary speaker classification tree (GMM-BT) and a non-binary speaker classification tree (GMM-NBT). The proposed GMM-BT and GMM-NBT methods achieve identity recognition rates of 84.28% and 83%, respectively; both exceed the 80.5% rate of the conventional GMM approach. Moreover, because the extremely complex face recognition calculations required in general audio-visual speaker recognition tasks are avoided, the proposed approach is rapid and efficient, adding only 0.051 s to the average recognition time.
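At the core of any GMM-based speaker recognizer is scoring a sequence of acoustic feature frames against each enrolled speaker's mixture model and picking the highest-likelihood speaker. A minimal sketch with toy 1-D mixtures (the paper's face-detection-regulated classification trees, which prune the candidate set before this scoring step, are not reproduced here; all parameters below are hypothetical, not trained models):

```python
# Minimal GMM speaker-identification sketch: each enrolled speaker is a
# 1-D Gaussian mixture given as a list of (weight, mean, variance) tuples.
# All model parameters are toy values for illustration only.
import math

def log_gauss(x, mean, var):
    """Log-density of a 1-D Gaussian at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def gmm_log_likelihood(frames, gmm):
    """Total log-likelihood of a sequence of feature frames under one GMM."""
    total = 0.0
    for x in frames:
        # log-sum-exp over mixture components, for numerical stability
        logs = [math.log(w) + log_gauss(x, m, v) for w, m, v in gmm]
        mx = max(logs)
        total += mx + math.log(sum(math.exp(l - mx) for l in logs))
    return total

def identify(frames, speakers):
    """Return the enrolled speaker whose GMM scores the frames highest."""
    return max(speakers, key=lambda name: gmm_log_likelihood(frames, speakers[name]))

speakers = {
    "alice": [(0.6, -1.0, 0.5), (0.4, 0.5, 0.3)],  # (weight, mean, variance)
    "bob":   [(0.5,  2.0, 0.4), (0.5, 3.0, 0.6)],
}
print(identify([1.9, 2.4, 3.1], speakers))  # frames lie near bob's means
```

In the approach above, the head count from face detection would restrict which speaker models enter this maximum-likelihood comparison, which is what the GMM-BT and GMM-NBT tree schemes regulate.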

