fisheye lens
Recently Published Documents

TOTAL DOCUMENTS: 179 (five years: 54)
H-INDEX: 14 (five years: 7)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Muhammad Ali Babar Abbasi ◽  
Rafay I. Ansari ◽  
Gabriel G. Machado ◽  
Vincent F. Fusco

Abstract
Antenna arrays and multi-antenna systems are essential in beyond-5G wireless networks for providing wireless connectivity, especially in the context of the Internet of Everything. To meet this requirement, beamforming is emerging as a key enabling technology for adaptive, on-demand wireless coverage. Although digital beamforming is the primary choice for adaptive wireless coverage, a set of applications relies on purely analogue beamforming, e.g., point-to-multipoint and physical-layer-secure communication links. In this work, we present a novel, scalable analogue beamforming hardware architecture capable of adaptive 2.5-dimensional beam steering and beam shaping to fulfil coverage requirements. The beamformer hardware comprises a finite-size Maxwell fisheye lens used as a scalable feed network for a semi-circular array of monopole antennas. This architecture offers the flexibility of using 2 to 8 antenna elements. The beamformer's development stages are presented, and experimental beam-steering and beam-shaping results show good agreement with the estimated performance.
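To make the steering principle concrete, here is a minimal numerical sketch of the array factor of an N-element semicircular array of idealized isotropic radiators, phase-steered toward a chosen angle. It is not a model of the paper's lens-fed monopole hardware; the element count, array radius, and unit-wavelength normalization are assumptions made for illustration only.

```python
import numpy as np

def array_factor(theta, n_elements=8, radius_wl=0.5, steer_deg=0.0):
    """Array factor of n_elements isotropic points on a semicircle of
    radius radius_wl (in wavelengths), phase-steered toward steer_deg.
    All geometry here is assumed for illustration, not taken from the paper."""
    k = 2 * np.pi                                # wavenumber with lambda = 1
    phi = np.linspace(0, np.pi, n_elements)      # element positions on the arc
    x, y = radius_wl * np.cos(phi), radius_wl * np.sin(phi)
    theta0 = np.deg2rad(steer_deg)
    # Conjugate the path phase toward theta0 so all elements add in phase there.
    weights = np.exp(-1j * k * (x * np.cos(theta0) + y * np.sin(theta0)))
    af = np.zeros_like(theta, dtype=complex)
    for xi, yi, wi in zip(x, y, weights):
        af += wi * np.exp(1j * k * (xi * np.cos(theta) + yi * np.sin(theta)))
    return np.abs(af) / n_elements               # normalized magnitude

theta = np.linspace(0, np.pi, 721)               # observation angles
pattern = array_factor(theta, n_elements=8, steer_deg=45.0)
print(f"pattern peak near {np.rad2deg(theta[pattern.argmax()]):.1f} deg")
```

Varying `n_elements` between 2 and 8 mirrors the flexibility claimed for the hardware: fewer elements give a broader beam, more elements a narrower one.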


2021 ◽  
Author(s):  
◽  
Peony Pui Yue Au

The objective of this research was to determine the adequacy of Android devices for capturing High Dynamic Range (HDR) photographs and for using them as a tool for daylight analysis in New Zealand's commercial building stock. The study was conducted with an Android smartphone, and later an Android tablet, fitted with a US$50 magnetic fisheye lens. The overall aim was to evaluate whether an inexpensive, programmable data-acquisition system could provide meaningful and useful luminance data.

To complete this research, the adequacy of computer simulation using HDR photography of the real horizontal and vertical skies was explored. Using the method documented here, the luminance distribution of building interiors could then be mapped accurately in daylight simulations.

The BRANZ Building Energy End-Use Study (BEES) team currently has one internal lighting measurement point, which records light levels in each of more than 100 commercial buildings randomly selected to be representative of commercial buildings in New Zealand. The HOBO U12 data logger typically records the environmental data on a desktop within the main area of the monitored premises. The HOBO data loggers provide environmental measurements only at that specific location and do not give the researcher the daylight distribution of the whole space. Using the data collected by BEES, this thesis explores the utility of HDR imaging as a supplement to a single internal light measurement in the analysis of daylight potential in New Zealand's commercial building stock.

Three buildings were randomly selected from the BEES targeted strata five database to be monitored over a one-day period. Within each building, at least three rooms facing different orientations were studied. The pilot study and the first two monitored buildings used a Motorola Defy smartphone to capture low dynamic range (LDR) photographs of each scene with both the HDR Camera application from the Google Play store and the smartphone's built-in camera application. The vertical sky (by pressing the smartphone hard up against the window) and the horizontal sky (from the ground) were also captured at each visit, since only one device was available at each monitored building, and to ensure consistency across buildings. These photographs were fused into a single HDR image using HDR software called Photosphere.

However, before HDR images containing accurate luminance data could be generated, a camera response curve was required. A camera response curve is unique to each device, needs to be generated only once, and can be produced with Photosphere. Unfortunately, a camera response curve could not be generated for the Motorola Defy, and through various experiments in both the lighting laboratory and the field it was discovered that this was caused not by the EXIF data contained within the captured photographs, as originally thought, but by the JPEG image format itself. As a result, a generic camera response curve from Photosphere was used for the pilot study and the first two monitored buildings. For the final monitored building, a Galaxy Note tablet was used; a camera response curve for this device could be generated easily with Photosphere.

The pilot study and the three monitored buildings were geometrically modelled in Google SketchUp 8 and exported to the Radiance Lighting Simulation and Rendering System using the su2rad plug-in. The files were then edited in the Ecotect™ Radiance Control Panel, after which the real and simulated images were compared using HDRShop and RadDisplay.

Four methods were used to compare the real and simulated data: pixel-to-pixel comparison, section-to-section pixel comparison, surface-to-surface comparison, and visual-field comparison. The first two were visual comparisons; the latter two were numerical, calculating relative error percentages. The biggest problem with the visual comparisons was geometrical misalignment caused by the fisheye lens, and they could only show luminance differences on a scale of 0 cd/m² to 50 cd/m². The numerical methods showed a 60% correlation between the real and simulated data.

It was concluded that, depending on the Android device used, HDR photographs can provide reliable images containing accurate luminance data, provided a camera response curve for the device can be generated.
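For readers reproducing the HDR workflow described above with current tools, the sketch below shows camera-response-curve recovery and LDR-to-HDR fusion using OpenCV as a stand-in for Photosphere (which the thesis actually used). The file names and exposure times are hypothetical placeholders.

```python
import cv2
import numpy as np

# Bracketed LDR exposures of one scene (hypothetical file names) and
# their exposure times in seconds. Debevec's method recovers the
# per-device camera response curve once; the curve is then reused to
# merge any bracketed set into a single float32 radiance map.
files = ["scene_ev-2.jpg", "scene_ev0.jpg", "scene_ev+2.jpg"]
times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)
imgs = [cv2.imread(f) for f in files]

calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(imgs, times)        # camera response curve

merge = cv2.createMergeDebevec()
hdr = merge.process(imgs, times, response)       # float32 radiance map

cv2.imwrite("scene.hdr", hdr)                    # Radiance .hdr for later analysis
```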


Author(s):  
Haitao Dai ◽  
Yamin Xing ◽  
Maozhou Chen ◽  
Meini Gao ◽  
Ziyang Guo ◽  
...  
Keyword(s):  

Author(s):  
Hsuan-Jui Su ◽  
Hsi-Tseng Chou ◽  
Hao-Ju Huang ◽  
Hsien-Kwei Ho
Keyword(s):  

APL Photonics ◽  
2021 ◽  
Vol 6 (9) ◽  
pp. 096104
Author(s):  
Daniel Headland ◽  
Andreas Kurt Klein ◽  
Masayuki Fujita ◽  
Tadao Nagatsuma

2021 ◽  
Vol 37 (4) ◽  
pp. 312-321
Author(s):  
Young-Hwa Jung ◽  
Gyuho Kim ◽  
Woo Sik Yoo

Underwater archaeology relies heavily on photography and video recording during surveys and excavations, just as archaeological studies on land do. All underwater images suffer from poor image quality and distortion due to poor visibility, low contrast, and blur, caused by the differences in the refractive indices of water and air, the properties of the selected lenses, and the shapes of the viewports. In the Yellow Sea (between mainland China and the Korean peninsula), underwater visibility is far less than 1 m, typically in the range of 30 cm to 50 cm even on a clear day, due to very high turbidity. For photographing 1 m × 1 m grids underwater, a very wide view angle (180°) fisheye lens with an 8 mm focal length is intentionally used despite severe, unwanted barrel distortion, even with a dome-port camera housing. It is very difficult to map wide underwater archaeological excavation sites by combining severely distorted images, so practical compensation methods for distorted underwater images acquired through the fisheye lens are strongly desired. In this study, the source of image distortion in underwater photography is investigated. We identified the source of the distortion as the mismatch, in optical axis and focal point, between the dome-port housing and the fisheye lens. A practical image-distortion compensation method using customized image-processing software was explored and verified against archived underwater excavation images for effectiveness in underwater archaeological applications. To minimize the image area rendered unusable by severe distortion after compensation, practical underwater photography guidelines are suggested.
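As a rough illustration of this kind of compensation (but not the authors' customized software), the sketch below undistorts a fisheye frame with OpenCV's fisheye camera model. The camera matrix K and distortion coefficients D are placeholder values; in practice they would come from calibrating the actual 8 mm lens inside its dome port (e.g., with cv2.fisheye.calibrate on checkerboard images).

```python
import cv2
import numpy as np

img = cv2.imread("grid_frame.jpg")               # hypothetical input frame
h, w = img.shape[:2]

# Assumed intrinsics: focal length and principal point are placeholders,
# not calibration results for the lens described in the paper.
K = np.array([[w / 3.0, 0.0,     w / 2.0],
              [0.0,     w / 3.0, h / 2.0],
              [0.0,     0.0,     1.0]], dtype=np.float64)
D = np.array([-0.05, 0.01, 0.0, 0.0], dtype=np.float64)  # assumed k1..k4

# Build the undistortion maps once, then remap the frame onto an
# idealized pinhole image with the barrel distortion removed.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
rectified = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)
cv2.imwrite("grid_frame_rectified.jpg", rectified)
```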


Author(s):  
Jin Zhao ◽  
Li-Zheng Yin ◽  
Feng-Yuan Han ◽  
Yi-Dong Wang ◽  
Tie-Jun Huang ◽  
...  
Keyword(s):  

Author(s):  
Yosoon Choi ◽  
Bugyun Kang ◽  
Minjung Kang ◽  
Mingi Kim ◽  
Sangbeom Lee ◽  
...  
