shadow area
Recently Published Documents


TOTAL DOCUMENTS: 85 (FIVE YEARS: 27)

H-INDEX: 5 (FIVE YEARS: 1)

2022 ◽  
Vol 12 (2) ◽  
pp. 824
Author(s):  
Kamran Javed ◽  
Nizam Ud Din ◽  
Ghulam Hussain ◽  
Tahir Farooq

Face photographs taken on a bright sunny day or under floodlights contain unwanted shadows of objects on the face. Most previous work deals with removing shadows from scene images and struggles to do so for facial images. Faces have a complex semantic structure, which makes shadow removal challenging. The aim of this research is to remove the shadow of an object in facial images. We propose a novel generative adversarial network (GAN) based image-to-image translation approach for shadow removal in face images. The first stage of our model automatically produces a binary segmentation mask for the shadow region. The second stage, a GAN-based network, then removes the object shadow and synthesizes the affected region. The generator network of our GAN has two parallel encoders: one is a standard convolution path and the other a partial convolution path. We find that this combination in the generator not only learns an integrated semantic structure but also disentangles visual discrepancies under the shadow area. In addition to the GAN loss, we exploit a low-level L1 loss, a structural SSIM loss, and a perceptual loss from a pre-trained loss network for better texture and perceptual quality. Since there is no paired dataset for the shadow removal problem, we created a synthetic shadow dataset to train our network in a supervised manner. The proposed approach effectively removes shadows from real and synthetic test samples while retaining complex facial semantics. Experimental evaluations consistently show the advantages of the proposed method over several representative state-of-the-art approaches.
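
A minimal sketch of how the loss terms named in this abstract (adversarial, L1, SSIM, and perceptual) could be combined for the generator; it is not the authors' code, and the loss weights, the VGG layer cut-off, and the simplified single-scale SSIM are illustrative assumptions.

```python
# Hedged sketch: combining GAN, L1, SSIM, and VGG perceptual losses.
# Weights and layer choices are assumptions, not the paper's values.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2, win=11):
    # Simplified single-scale SSIM computed with average-pooling windows.
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim.mean()

# Frozen pre-trained network used only as a perceptual feature extractor.
vgg_features = vgg16(weights="DEFAULT").features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def generator_loss(fake, real, disc_fake_logits, w_l1=10.0, w_ssim=5.0, w_perc=1.0):
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))  # GAN loss
    l1 = F.l1_loss(fake, real)                                 # low-level L1
    ssim = ssim_loss(fake, real)                               # structural SSIM
    perc = F.l1_loss(vgg_features(fake), vgg_features(real))   # perceptual loss
    return adv + w_l1 * l1 + w_ssim * ssim + w_perc * perc
```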


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tomonori Endo ◽  
Aki Gemma ◽  
Ryoto Mitsuyoshi ◽  
Hiroki Kodama ◽  
Daiya Asaka ◽  
...  

Abstract: Research has previously shown that ultraviolet light C (UV-C) can inactivate pathogens that cause unexpected infections. However, this potential disinfection effect is dramatically reduced in shadow areas, such as under desks or medical equipment, because the UV-C reflectance ratio of typical wall surfaces is low. We compared stucco against other materials to investigate whether disinfection of shadow areas could be improved. The UV-C reflectance ratio of each material was examined, with particular attention to the authors' Modified Stucco. To evaluate the disinfection effect of reflected UV-C light, colonies of E. coli and Staphylococcus hominis were cultured on agar media and counted over a set time period after applying UV-C irradiation from a sterilizing lamp onto the investigated materials. The authors' Modified Stucco produced a reflectance ratio 11 times that of white wallpaper. UV-C reflected from a stucco wall with optimum components and composition inhibited the growth of E. coli and S. hominis, yielding significantly stronger disinfection effects than white wallpaper. A space finished with Modified Stucco and irradiated with UV-C may therefore provide a strong disinfection effect.


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6810
Author(s):  
Donggeun Oh ◽  
Junghee Han

UAVs (Unmanned Aerial Vehicles) have been developed and adopted in various fields, including the military, IT, agriculture, and construction. In particular, UAVs are heavily used in disaster relief, as they are becoming smaller and more intelligent. Searching for a person at a disaster site can be difficult if the mobile communication network is unavailable or the person is in a GPS shadow area. Searching for survivors using unmanned aerial vehicles has recently been studied, but several problems remain because the search mainly relies on images taken with cameras (including thermal imaging cameras). For example, it is difficult to distinguish a distressed person from a long distance, especially in the presence of cover. Considering these challenges, we propose an autonomous UAV smart search system that can complete its mission of searching for and tracking castaways without interference, even in disaster areas where communication with base stations is likely to be lost. To achieve this goal, we first make the UAVs fly autonomously, locating and approaching distressed people without the help of a ground control server (GCS). Second, to locate a survivor accurately, we developed a genetic-algorithm-based localization method that detects changes in signal strength between the distressed person and the drones inside the search system. Specifically, we modeled our target platform with a genetic algorithm and redefined the algorithm, customized to the disaster site's environment, for tracking accuracy. Finally, we verified the proposed search system at several real-world sites and found that it successfully located targets during autonomous flight.
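
To make the idea of genetic-algorithm-based localization from signal strength concrete, the following is a minimal sketch, not the paper's implementation: it estimates a 2-D target position from RSSI readings at a few drone waypoints, assuming a log-distance path-loss model. All constants (reference power, path-loss exponent, population size, mutation scale) are illustrative assumptions.

```python
# Hedged sketch: genetic algorithm fitting a target position to RSSI readings.
import numpy as np

rng = np.random.default_rng(0)

def rssi_model(pos, waypoints, p0=-40.0, n=2.5):
    # Log-distance path-loss model (assumed, not from the paper).
    d = np.linalg.norm(waypoints - pos, axis=1) + 1e-6
    return p0 - 10.0 * n * np.log10(d)

def locate(waypoints, measured, pop_size=100, generations=200, bounds=500.0):
    pop = rng.uniform(-bounds, bounds, size=(pop_size, 2))    # candidate positions
    for _ in range(generations):
        err = np.array([np.mean((rssi_model(p, waypoints) - measured) ** 2) for p in pop])
        elite = pop[np.argsort(err)[: pop_size // 4]]          # selection: best quarter
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        children = parents.mean(axis=1)                        # crossover: parent midpoint
        children += rng.normal(0.0, 5.0, size=children.shape)  # mutation: Gaussian jitter
        children[0] = elite[0]                                 # elitism: keep best solution
        pop = children
    err = np.array([np.mean((rssi_model(p, waypoints) - measured) ** 2) for p in pop])
    return pop[np.argmin(err)]

# Toy usage: noisy RSSI measured at four waypoints around a hidden target.
target = np.array([120.0, -80.0])
waypoints = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0], [200.0, 200.0]])
measured = rssi_model(target, waypoints) + rng.normal(0.0, 0.5, size=4)
print(locate(waypoints, measured))  # should land near (120, -80)
```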


2021 ◽  
pp. bjophthalmol-2020-318646
Author(s):  
Honglian Xiong ◽  
Qi Sheng You ◽  
Yukun Guo ◽  
Jie Wang ◽  
Bingjie Wang ◽  
...  

Synopsis: A deep-learning-based macular extrafoveal avascular area (EAA) on a 6×6 mm optical coherence tomography (OCT) angiogram is less dependent on signal strength and shadow artefacts, providing better diagnostic accuracy for diabetic retinopathy (DR) severity than the commercial-software-measured extrafoveal vessel density (EVD).
Aims: To compare a deep-learning-based EAA to the commercial-output EVD in the diagnostic accuracy of determining DR severity levels from 6×6 mm OCT angiography (OCTA) scans.
Methods: The 6×6 mm macular OCTA scans were acquired on one eye of each participant with a spectral-domain OCTA system. After excluding the central 1 mm diameter circle, the EAA on the superficial vascular complex was measured with a deep-learning-based algorithm, and the EVD was obtained with commercial software.
Results: The study included 34 healthy controls and 118 diabetic patients. EAA and EVD were highly correlated with DR severity (ρ=0.812 and −0.577, respectively, both p<0.001) and visual acuity (r=−0.357 and 0.420, respectively, both p<0.001). EAA had a significantly (p<0.001) higher correlation with DR severity than EVD. With specificity set at 95%, the sensitivities of EAA for differentiating diabetes mellitus (DM), DR and severe DR from controls were 80.5%, 92.0% and 100.0%, respectively, significantly higher than those of EVD at 11.9% (p=0.001), 13.6% (p<0.001) and 15.8% (p<0.001). EVD was significantly correlated with the signal strength index (SSI) (r=0.607, p<0.001) and shadow area (r=−0.530, p<0.001), but EAA was not (r=−0.044, p=0.805 and r=−0.046, p=0.796, respectively). Adjusting EVD for SSI and shadow area lowered the sensitivities for detection of DM, DR and severe DR.
Conclusion: Macular EAA on 6×6 mm OCTA measured with a deep-learning-based algorithm is less dependent on signal strength and shadow artefacts, and provides better diagnostic accuracy for DR severity than EVD measured with the instrument-embedded software.
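
As an aside on the reported metric, the following is a minimal sketch, not the study's analysis code, of how sensitivity at a fixed 95% specificity can be computed from continuous scores such as EAA; it assumes larger scores indicate more severe disease, and the simulated values are illustrative only.

```python
# Hedged sketch: sensitivity at a fixed specificity from continuous scores.
import numpy as np

def sensitivity_at_specificity(scores_controls, scores_cases, specificity=0.95):
    # Pick the threshold that keeps `specificity` of controls below it,
    # then report the fraction of cases above that threshold.
    threshold = np.quantile(np.asarray(scores_controls), specificity)
    return float(np.mean(np.asarray(scores_cases) > threshold))

# Toy usage with simulated scores (group sizes mirror the study; values are made up).
rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, 34)   # 34 healthy controls
cases = rng.normal(2.5, 1.0, 118)     # 118 diabetic patients
print(sensitivity_at_specificity(controls, cases))
```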


2021 ◽  
Vol 13 (10) ◽  
pp. 5392
Author(s):  
José Ángel Aranda ◽  
María Moncho Santonja ◽  
MÁ Gil Saurí ◽  
Guillermo Peris-Fajarnés

The lack of sunlight on mountain roads in the wintertime leads to an increase in accidents. In this paper, a methodology is presented for including sunny and shady areas as a parameter in road design. The inclusion of this parameter allows an iterative design method for the projected infrastructure. Parameterizing the road layout makes it possible to apply an iterative process of modifying its constituent geometric elements, examining different layout alternatives until a layout is achieved in which the surface area in shade is minimized, increasing road safety and minimizing environmental impact. The methodology generates and analyzes the results of the solar lighting study using a file in IFC format that can be integrated with the rest of the design elements (platform, signaling, structures, etc.), thus obtaining a BIM model that can be viewed in three dimensions and extended towards 4D and 5D. The model used for the study was a high mountain road located in the province of Teruel (Spain). It is a road section characterized by successive curves in which several traffic accidents have occurred due to vehicles running off the road, partly because of the presence of ice on the platform.
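
The iterative idea described above can be sketched as a simple search loop; this is not the paper's workflow, and `shaded_area()` below is a hypothetical stand-in for the solar lighting study run on the IFC/BIM model, with made-up geometric parameters.

```python
# Hedged sketch: perturb layout parameters, keep the alternative with less shade.
import random

def shaded_area(layout):
    # Placeholder for the solar study on the 3D/IFC model of this layout;
    # here it is just a dummy function of two illustrative parameters.
    radius, azimuth = layout
    return abs(1000.0 / radius) + 50.0 * abs(azimuth - 0.6)

def iterate_layout(initial, iterations=100, step=(20.0, 0.05)):
    best, best_area = initial, shaded_area(initial)
    for _ in range(iterations):
        candidate = (best[0] + random.uniform(-step[0], step[0]),
                     best[1] + random.uniform(-step[1], step[1]))
        area = shaded_area(candidate)
        if area < best_area:          # retain the layout alternative with less shade
            best, best_area = candidate, area
    return best, best_area

print(iterate_layout((400.0, 0.3)))   # e.g. curve radius (m) and azimuth (rad)
```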


Author(s):  
Malabika Tripathi

Appreciative inquiry is an 'asset-based' approach that focuses on the positive aspects of life. Through its 4D cycle and principles, it generates organizational transformation. When practiced, AI goes through the 4D cycle of discovery, dream, design, and destiny. An individual gets the scope to rediscover and reorganize while passing through these four phases, which liberates the mind through exploration and introspection. AI interventions can even clarify the 'shadow' area of the human mind. By reviewing the existing literature in this field, the chapter seeks to establish AI as a potential tool that can be used at the individual level to help raise mental health awareness.


2021 ◽  
Vol 37 (4) ◽  
pp. 751-761
Author(s):  
Francisco Rojo ◽  
Rajveer Dhillon ◽  
Shrinivasa K. Upadhyaya ◽  
Hunjun Liu ◽  
Jedediah Roach

Highlights:
Measurement of canopy light interception data using a ground-based mobile system.
Using UAV-captured aerial images and the zenith angle to estimate canopy light interception at different times of the day.
Identifying boundaries of individual trees using the maximum likelihood estimator and the watershed algorithm.
Abstract: Photosynthetically Active Radiation (PAR) absorbed by the leaves is a key piece of information for studying the crop response to environmental conditions and could be used to estimate crop production potential. Canopies in a commercial orchard differ in their capability to intercept light, mainly due to spatial variability in canopy development. There is a need to develop tools that can capture the spatial variability in PAR interception to predict potential yield. Unmanned Aerial Vehicles (UAVs) are an interesting alternative for providing this information, as they cover a larger area than ground-based systems in a shorter period and with high spatial resolution. The objective of this study was to determine the relationship between the shadow of a tree derived from a ground-based canopy light interception scan, obtained using a lightbar mounted on a mobile platform, and that acquired from UAV Red-Green-Blue (RGB) images. Information acquired by a UAV was classified to separate the canopy from its shadow, grass, and sunlit soil using a maximum likelihood estimator. Boundaries of individual trees were identified from their positions using the watershed transform algorithm. The relationship between the canopy PAR interception data, the sun's angle in the sky (zenith angle), and the information derived from the aerial images was analyzed. Coefficient of determination (R2) values of 0.92 and 0.88 were found for the multiple linear regression of PAR on the shadow area and the cosine of the zenith angle for almond and walnut crops, respectively. Moreover, R2 values of 0.81 and 0.86 were found for the relationship between the shadow area measured underneath the canopy and the shadow area obtained from the UAV images together with the cosine of the zenith angle, for almond and walnut crops, respectively. The results show that PAR interception can be estimated using the zenith angle and the area of the shadow, which can be obtained from an RGB aerial image.
Keywords: Almond, Canopy segmentation, Image classification, PAR interception, Shadow area, UAV, Walnut.
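
The following is a minimal sketch, not the authors' analysis, of the multiple linear regression described above: PAR interception regressed on the UAV-derived shadow area and the cosine of the solar zenith angle. The sample values are synthetic and purely illustrative.

```python
# Hedged sketch: PAR ~ intercept + shadow area + cos(zenith angle).
import numpy as np

def fit_par_model(par, shadow_area, cos_zenith):
    # Design matrix with intercept, shadow-area, and cos(zenith) columns.
    X = np.column_stack([np.ones_like(par), shadow_area, cos_zenith])
    coef, *_ = np.linalg.lstsq(X, par, rcond=None)
    predicted = X @ coef
    ss_res = np.sum((par - predicted) ** 2)
    ss_tot = np.sum((par - par.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot      # coefficients and R^2

# Toy usage with synthetic per-tree measurements (illustrative numbers only).
rng = np.random.default_rng(2)
shadow = rng.uniform(5.0, 25.0, 40)                          # shadow area per tree
cosz = np.cos(np.radians(rng.uniform(10.0, 60.0, 40)))       # cosine of zenith angle
par = 0.8 + 0.9 * shadow * cosz / 20.0 + rng.normal(0.0, 0.05, 40)
coef, r2 = fit_par_model(par, shadow, cosz)
print(coef, r2)
```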


2020 ◽  
Vol 65 (4) ◽  
Author(s):  
Sunandini G.P.

The present study was carried out in the erstwhile Mahabubnagar district of Telangana, the major producer of pigeon pea among pulses, which lies in the rain shadow area of the state and suffers from frequent droughts and fluctuating rainfall. The study further revealed that about 70 percent of farmers stated that changing climatic conditions have reduced pigeon pea yield, 28 percent opined that they reduced output and seed quality, and 87 percent are interested in an alternative crop as a coping mechanism in case of complete crop failure. According to the farmers' perception, the most significant change in climate is the erratic distribution of rainfall, with the highest Garrett score of 76.66, followed by a delay in the monsoon, which scored 69.54. The farmers' suggestions for facing extreme climate conditions, in ranked order, are: dissemination of knowledge on contingent crops; loans for a second crop and waiver of earlier loans; early settlement of crop insurance; high-yielding, early-duration pigeon pea varieties to escape terminal moisture stress; and sufficient quantities of quality seed on subsidy for a second crop in case the initial crop fails.


2020 ◽  
Vol 43 (1) ◽  
pp. 29-45
Author(s):  
Alex Noel Joseph Raj ◽  
Ruban Nersisson ◽  
Vijayalakshmi G. V. Mahesh ◽  
Zhemin Zhuang

The nipple is a vital landmark in breast lesion diagnosis. Although there are advanced computer-aided detection (CADe) systems for nipple detection in breast mediolateral oblique (MLO) views of mammogram images, few academic works address the coronal views of breast ultrasound (BUS) images. This paper presents a novel CADe system to locate the Nipple Shadow Area (NSA) in ultrasound images. Here, Hu moments and the Gray-Level Co-occurrence Matrix (GLCM) are calculated over an iterative sliding window to extract shape and texture features. These features are then concatenated and fed into an Artificial Neural Network (ANN) to obtain probable NSAs. Contour features, such as shape complexity via fractal dimension, edge distance from the periphery, and contour area, are then computed and passed into a Support Vector Machine (SVM) to identify the correct NSA in each case. The coronal-plane BUS dataset was built in-house and consists of 64 images from 13 patients. The test results show that the proposed CADe system achieves 91.99% accuracy, 97.55% specificity, 82.46% sensitivity and an 88% F-score on our dataset.
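
A minimal sketch, not the authors' code, of the feature-extraction step named above: Hu moments and GLCM statistics computed over a sliding window of a grayscale ultrasound image. The window size, stride, and the particular GLCM properties chosen here are illustrative assumptions; the resulting feature vectors would then be scored by the ANN.

```python
# Hedged sketch: sliding-window Hu-moment and GLCM feature extraction.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def window_features(image, win=64, stride=32):
    features = []
    for y in range(0, image.shape[0] - win + 1, stride):
        for x in range(0, image.shape[1] - win + 1, stride):
            patch = image[y:y + win, x:x + win]
            hu = cv2.HuMoments(cv2.moments(patch)).flatten()   # 7 shape features
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            texture = [graycoprops(glcm, p)[0, 0]
                       for p in ("contrast", "homogeneity", "energy", "correlation")]
            features.append(((y, x), np.concatenate([hu, texture])))
    return features  # (window position, 11-dimensional feature vector) pairs

# Toy usage on a synthetic 8-bit image.
img = (np.random.default_rng(3).random((256, 256)) * 255).astype(np.uint8)
feats = window_features(img)
print(len(feats), "windows,", len(feats[0][1]), "features each")
```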

