human vision system
Recently Published Documents


TOTAL DOCUMENTS

70
(FIVE YEARS 18)

H-INDEX

8
(FIVE YEARS 1)

Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1443
Author(s):  
Sandip Dutta ◽  
Martha Wilson

Machine vision has been thoroughly studied in the past, but research thus far has lacked an engineering perspective on human vision. This paper addresses the observed and hypothetical neural behavior of the brain in relation to the visual system. In a human vision system, visual data are collected by photoreceptors in the eye, and these data are then transmitted to the rear of the brain for processing. There are millions of retinal photoreceptors of various types, and their signals must be unscrambled by the brain after they are carried through the optic nerves. This work is a step forward toward explaining how the photoreceptor locations and proximities are resolved by the brain. It is illustrated here that, unlike in digital image sensors, there is no one-to-one sensor-to-processor identifier in the human vision system. Instead, the brain must go through an iterative learning process to identify the spatial locations of the photosensors in the retina. This involves a process called synaptic pruning, which can be simulated by a memristor-like component in a learning circuit model. The simulations and proposed mathematical models in this study provide a technique that can be extrapolated to create spatial distributions of networked sensors without a central observer or location knowledge base. Through the mapping technique, the retinal space with a known configuration generates signals as a scrambled data feed to the logical space in the brain. This scrambled response is then reverse-engineered to map the logical space's connectivity to the retinal space locations.
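The mapping idea can be illustrated with a toy simulation. The following is a minimal sketch, not the authors' model: a one-dimensional "retina" of photoreceptors whose outputs reach the "brain" through an unknown fixed permutation, and the mapping is recovered purely by correlating stimulus positions with channel activity. All names and parameter values are illustrative assumptions.

```python
# A minimal sketch (not the authors' model): recover an unknown
# photoreceptor-to-channel permutation by iterative observation.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # photoreceptors on a toy 1-D retina
perm = rng.permutation(N)                 # unknown wiring: retinal position -> nerve channel

def present_stimulus(center, width=3.0):
    """Shine a localized spot on the retina; return the scrambled channel response."""
    retina = np.exp(-0.5 * ((np.arange(N) - center) / width) ** 2)
    channels = np.empty(N)
    channels[perm] = retina               # the signal is scrambled on its way to the "brain"
    return channels

# Iteratively accumulate evidence of which channel responds to which position.
correlation = np.zeros((N, N))            # rows: stimulus positions, cols: channels
for pos in rng.permutation(np.repeat(np.arange(N), 8)):   # each position shown 8 times
    correlation[pos] += present_stimulus(pos)

recovered = correlation.argmax(axis=0)    # estimated retinal position of each channel
true_position = np.empty(N, dtype=int)
true_position[perm] = np.arange(N)        # ground-truth position of each channel
print("fraction of channels mapped correctly:", np.mean(recovered == true_position))
```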


2021 ◽  
Author(s):  
Essam M.S.A.E.A. Dabbour

Most current collision warning systems are designed to detect imminent rear-end, lane-changing, or lane-departure collisions. None of them was designed to detect imminent intersection collisions, which have been found to cause more fatalities and injuries than other types of collisions. One of the most important factors leading to intersection collisions is driver error and misjudgement. A main source of these errors is the insensitivity of the human vision system in detecting the speed and acceleration of approaching vehicles; therefore, any algorithm for an intersection collision warning system should take the speed and acceleration of all approaching vehicles into account to mitigate this inadequacy of the human vision system. Moreover, when designing any collision warning system, false warnings should be minimized to avoid nuisance to drivers, which might lead potential users to lose confidence in the system's reliability. This research proposed an intersection collision warning system that utilizes commercially available detection sensors to detect approaching vehicles and measure their speeds and acceleration rates in order to estimate the time-to-collision and compare it to the time required for the turning vehicle to clear the paths of the approaching vehicles. By comparing these times, the system triggers a warning message if an imminent collision is detected. Minimum specifications for key hardware components are established for the proposed system, which does not depend on a specific technology. To estimate the time required to clear the paths of the approaching vehicles, statistical models were developed to estimate the perception-reaction time of the driver of the turning vehicle and the rate of acceleration selected when departing the intersection. The statistical models include regression models calibrated from data collected through driving simulation, as well as more sophisticated artificial neural network models based on actual data collected from a specific driver in a specific vehicle. The proposed system was validated by computer simulation to verify the accuracy of the developed algorithms and to measure the impact of different components on the functionality and reliability of the system. Final conclusions are provided along with recommendations for further research.
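The core comparison the system performs can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it estimates the time for each approaching vehicle to reach the conflict point under constant acceleration, compares it with the time the turning vehicle needs to clear that point (perception-reaction time plus departure travel time), and warns when the margin is too small. All function names, thresholds, and numbers are assumptions.

```python
# A minimal sketch (not the paper's algorithm) of the core warning logic.
from dataclasses import dataclass

@dataclass
class ApproachingVehicle:
    distance_m: float      # distance to the conflict point, from the detection sensor
    speed_mps: float       # measured speed
    accel_mps2: float      # measured acceleration

def time_to_conflict(v: ApproachingVehicle) -> float:
    """Time for the approaching vehicle to reach the conflict point,
    assuming constant acceleration (d = v0*t + 0.5*a*t^2)."""
    a, v0, d = v.accel_mps2, v.speed_mps, v.distance_m
    if abs(a) < 1e-6:
        return d / v0 if v0 > 0 else float("inf")
    disc = v0 ** 2 + 2 * a * d
    if disc < 0:                       # vehicle decelerates to a stop before the conflict point
        return float("inf")
    return (-v0 + disc ** 0.5) / a

def clearance_time(perception_reaction_s: float, clear_distance_m: float,
                   departure_accel_mps2: float) -> float:
    """Time for the turning vehicle to clear the conflict zone from a stop:
    perception-reaction time plus travel time at the departure acceleration."""
    travel = (2 * clear_distance_m / departure_accel_mps2) ** 0.5
    return perception_reaction_s + travel

def should_warn(approaching, perception_reaction_s, clear_distance_m,
                departure_accel_mps2, safety_margin_s=1.0) -> bool:
    ttc = min(time_to_conflict(v) for v in approaching)
    needed = clearance_time(perception_reaction_s, clear_distance_m, departure_accel_mps2)
    return ttc < needed + safety_margin_s   # warn if the available gap is not safely larger

# Example: one vehicle 60 m away at 15 m/s, mildly accelerating.
print(should_warn([ApproachingVehicle(60, 15, 0.3)],
                  perception_reaction_s=1.5, clear_distance_m=12,
                  departure_accel_mps2=1.8))
```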


2021 ◽  
Vol 7 (2) ◽  
pp. 75
Author(s):  
Halim Bayuaji Sumarna ◽  
Ema Utami ◽  
Anggit Dwi Hartanto

Image enhancement is a procedure used to process images in order to correct or improve their quality so that they can subsequently be analyzed for specific purposes. Many image enhancement algorithms can be applied to an image; one usable method is the structural similarity index measure (SSIM) algorithm, which serves as a measuring tool for assessing image quality. It works by comparing the structural features of images, and image quality is described by structural similarity. In addition to assessing the quality of an image, SSIM can be used as a method for analyzing image differences, so that anomalies can be identified from the comparison of two images based on their structural data. This systematic literature review analyzes and focuses on the SSIM algorithm for detecting anomalies between two images that look similar to the human visual system. The results of the systematic review show that the use of the SSIM algorithm in assessing image quality correlates strongly with the HVS (Human Vision System), and that in image anomaly detection it produces varying accuracy, because it is affected by the light intensity and the camera position when the images in the dataset are taken.
Keywords: SSIM, anomaly, image, detection
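As an illustration of the SSIM-based anomaly-detection idea described above, the following minimal sketch assumes scikit-image and OpenCV and two roughly aligned grayscale images; the file names and the 0.5 threshold are assumptions, not values from the reviewed studies.

```python
# A minimal sketch: compute SSIM between two images and flag regions that differ.
import cv2
import numpy as np
from skimage.metrics import structural_similarity

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
test = cv2.imread("inspected.png", cv2.IMREAD_GRAYSCALE)

# full=True also returns the per-pixel SSIM map, not just the global score.
score, ssim_map = structural_similarity(ref, test, full=True, data_range=255)
print(f"global SSIM: {score:.3f}")                          # 1.0 means identical structure

# Pixels with low local SSIM are candidate anomalies (threshold is an assumption).
anomaly_mask = (ssim_map < 0.5).astype(np.uint8) * 255
cv2.imwrite("anomaly_mask.png", anomaly_mask)
```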


Author(s):  
Marinella Cadoni ◽  
Andrea Lagorio ◽  
Souad Khellat-Kihel ◽  
Enrico Grosso

Traditional local image descriptors such as SIFT and SURF are based on processing similar to that which takes place in the early visual cortex. Nowadays, convolutional neural networks still draw inspiration from the human vision system, integrating computational elements typical of higher visual cortical areas. Deep CNN architectures are intrinsically hard to interpret, so much effort has been made to dissect them in order to understand which types of features they learn. However, considering the resemblance to the human vision system, not enough attention has been devoted to understanding whether the image features learned by deep CNNs and used for classification correlate with the features that humans select when viewing images, the so-called human fixations, nor whether they correlate with earlier handcrafted features such as SIFT and SURF. Exploring these correlations is highly meaningful, since what we require from CNNs, and from features in general, is to recognize and correctly classify objects or subjects relevant to humans. In this paper, we establish the correlation between three families of image interest points: human fixations, handcrafted features, and CNN features. We extract features from the feature maps of selected layers of several deep CNN architectures, from the shallowest to the deepest. All features and fixations are then compared with two types of measures, global and local, which unveil the degree of similarity of the areas of interest of the three families. From the experiments carried out on the ETD human fixations database, it turns out that human fixations are positively correlated with handcrafted features, and even more so with the deep layers of CNNs, and that handcrafted features correlate highly among themselves, as some CNNs do.
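A rough sense of how such a comparison can be set up is given by the sketch below; it is not the paper's exact pipeline. It assumes PyTorch/torchvision and OpenCV, extracts an intermediate ResNet-50 feature map and SIFT keypoints from the same image, and measures how many keypoints fall in the most active CNN regions. The chosen layer, quantile, and file name are assumptions.

```python
# A minimal sketch (not the paper's pipeline): overlap between SIFT keypoints
# and the most active regions of an intermediate CNN feature map.
import cv2
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor

img_bgr = cv2.imread("image.jpg")                         # hypothetical input image
gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)

# Handcrafted interest points.
sift_points = [kp.pt for kp in cv2.SIFT_create().detect(gray, None)]

# Intermediate feature map of a pretrained ResNet-50 ("layer3" is an arbitrary choice).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
extractor = create_feature_extractor(model, return_nodes={"layer3": "feat"})

rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
rgb = cv2.resize(rgb, (224, 224))
mean, std = np.array([0.485, 0.456, 0.406]), np.array([0.229, 0.224, 0.225])
x = torch.from_numpy((rgb - mean) / std).float().permute(2, 0, 1).unsqueeze(0)
with torch.no_grad():
    feat = extractor(x)["feat"]                           # shape (1, C, H', W')

# Collapse channels into a saliency-like map and resize it back to the image grid.
saliency = F.interpolate(feat.abs().mean(1, keepdim=True), size=gray.shape,
                         mode="bilinear", align_corners=False)
saliency = saliency.squeeze().numpy()

# Crude local measure: fraction of SIFT keypoints inside the top-20% CNN activations.
threshold = np.quantile(saliency, 0.8)
hits = sum(saliency[int(y), int(x)] >= threshold for x, y in sift_points)
print("SIFT keypoints in top CNN regions:", hits / max(len(sift_points), 1))
```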


2021 ◽  
Author(s):  
Zhilong Xin ◽  
Yang Tan ◽  
Tong Chen ◽  
Emad Iranmanesh ◽  
Lei Li ◽  
...  

The detected wavelength of perovskite quantum dots embedded in an IGZO TFT can be tuned by replacing the quantum dots' halogen ions. It is expected that a color-distinguishable artificial human vision system can be developed.


Author(s):  
Ruchir Shah ◽  
Dhaval Tamboli ◽  
Ajay Makwana ◽  
Ravindra Baria ◽  
Kishori Shekokar ◽  
...  

In this survey paper, we discuss a proposed system that can serve as a visionary eye for a blind person. A common goal in computer vision research is to build machines that can replicate the human vision system, for example, to recognize and describe objects and scenes, and thereby help people who are blind overcome their daily visual challenges. The aim is to develop a machine that provides vocal and graphical assistive answers: operating through a voice assistant, it takes an image captured by the person, processes it, and extracts the result using neural networks.
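One way such a pipeline could be wired together, purely as an assumed sketch rather than the surveyed system, is to classify a captured image with a pretrained network and read the predicted label aloud; the model choice and the pyttsx3 text-to-speech library are assumptions.

```python
# A minimal sketch (not the surveyed system): classify a captured photo and
# speak the predicted label. Model and TTS library are assumptions.
import pyttsx3
import torch
from PIL import Image
from torchvision import models

weights = models.MobileNet_V3_Small_Weights.DEFAULT
model = models.mobilenet_v3_small(weights=weights).eval()
preprocess = weights.transforms()                 # resize, crop, normalize

image = Image.open("captured.jpg").convert("RGB") # hypothetical photo taken by the user
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
label = weights.meta["categories"][logits.argmax().item()]

engine = pyttsx3.init()                           # vocal assistive answer
engine.say(f"I can see a {label}")
engine.runAndWait()
```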


2020 ◽  
Vol 10 (21) ◽  
pp. 7582
Author(s):  
Dariusz Frejlichowski

For many decades researchers have been trying to make computer analysis of images as effective as the human vision system is [...]

