Erratum: “Acoustic Lock: Position and orientation trapping of non-spherical sub-wavelength particles in mid-air using a single-axis acoustic levitator” [Appl. Phys. Lett. 113, 054101 (2018)]

Applied Physics Letters ◽  
2021 ◽  
Vol 119 (6) ◽  
pp. 069901
Author(s):  
L. Cox ◽  
A. Croxford ◽  
B. W. Drinkwater ◽  
A. Marzo
PIERS Online ◽  
2005 ◽  
Vol 1 (1) ◽  
pp. 37-41 ◽  
Author(s):  
Pavel A. Belov ◽  
C. R. Simovski

Author(s):  
Marco A.B. Andrade ◽  
Flávio Buiochi ◽  
Julio C. Adamowski

Nanophotonics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 897-903 ◽  
Author(s):  
Oleksandr Buchnev ◽  
Alexandr Belosludtsev ◽  
Victor Reshetnyak ◽  
Dean R. Evans ◽  
Vassili A. Fedotov

Abstract: We demonstrate experimentally that Tamm plasmons in the near infrared can be supported by a dielectric mirror interfaced with a metasurface, a discontinuous thin metal film periodically patterned on the sub-wavelength scale. More crucially, not only do Tamm plasmons survive the nanopatterning of the metal film, but they also become sensitive to external perturbations as a result. In particular, by depositing a nematic liquid crystal on the outer side of the metasurface, we were able to red-shift the spectral position of the Tamm plasmon by 35 nm, while electrical switching of the liquid crystal enabled us to tune the wavelength of this notoriously inert excitation within a 10-nm range.


2021 ◽  
Vol 11 (9) ◽  
pp. 4269
Author(s):  
Kamil Židek ◽  
Ján Piteľ ◽  
Michal Balog ◽  
Alexander Hošovský ◽  
Vratislav Hladký ◽  
...  

Assisted assembly of customized products, supported by collaborative robots combined with mixed-reality devices, is a current trend in the Industry 4.0 concept. This article introduces an experimental work cell implementing the assisted assembly process for customized cam switches as a case study. The research aims to design a methodology for this complex task with full digitalization and transformation of data from all vision systems into digital twin models. The position and orientation of parts assembled during manual assembly are marked and checked by a convolutional neural network (CNN) model. Training of the CNN was based on a new approach using virtual training samples with single-shot detection and instance segmentation. The trained CNN model was transferred to an embedded artificial processing unit with a high-resolution camera sensor. The embedded device redistributes the detected position and orientation of parts to the mixed-reality devices and the collaborative robot. This approach to assisted assembly using mixed reality, a collaborative robot, vision systems, and CNN models can significantly decrease assembly and training time in real production.
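The abstract does not give implementation details for how part pose is recovered from the CNN output. As a minimal illustration only, the sketch below shows one standard way to derive 2-D position and orientation from an instance-segmentation mask using image moments (centroid plus principal-axis angle); the function name and mask representation are hypothetical and not taken from the paper.

```python
import math

def pose_from_mask(mask):
    """Estimate a part's 2-D position (centroid) and orientation
    from a binary instance-segmentation mask, given here as a
    list of (x, y) foreground pixel coordinates."""
    n = len(mask)
    # Centroid = first-order moments normalized by the area.
    cx = sum(x for x, _ in mask) / n
    cy = sum(y for _, y in mask) / n
    # Central second-order moments.
    mu20 = sum((x - cx) ** 2 for x, _ in mask) / n
    mu02 = sum((y - cy) ** 2 for _, y in mask) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in mask) / n
    # Orientation of the principal axis, in radians.
    theta = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), theta

# Example: a thin part lying along the x-axis -> orientation ~ 0 rad.
mask = [(x, y) for x in range(10) for y in (0, 1)]
(cx, cy), theta = pose_from_mask(mask)
```

In a real pipeline this geometric step would run on the segmentation masks produced by the detector, and the resulting poses would then be streamed to the mixed-reality devices and the robot controller.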


2021 ◽  
Vol 197 ◽  
pp. 106308
Author(s):  
Yijie Liu ◽  
Liang Jin ◽  
Hongfa Wang ◽  
Dongying Liu ◽  
Yingjing Liang
