Multi-Source and Multi-Temporal Image Fusion on Hypercomplex Bases

2020 ◽  
Vol 12 (6) ◽  
pp. 943
Author(s):  
Andreas Schmitt ◽  
Anna Wendleder ◽  
Rüdiger Kleynmans ◽  
Maximilian Hell ◽  
Achim Roth ◽  
...  

This article establishes a new, consistent framework for the production, archiving, and provision of analysis-ready data (ARD) from multi-source and multi-temporal satellite acquisitions and a subsequent image fusion. The core of the image fusion is an orthogonal transform of the reflectance channels from optical sensors on hypercomplex bases, delivered in Kennaugh-like elements, which are well known from polarimetric radar. In this way, SAR and optical data can be fused into one image data set sharing the characteristics of both: the sharpness of optics and the texture of SAR. The special properties of Kennaugh elements regarding their scaling (linear, logarithmic, normalized) apply likewise to the new elements and guarantee their robustness towards noise, radiometric sub-sampling, and therewith data compression. This study combined Sentinel-1 and Sentinel-2 on an octonion basis as well as Sentinel-2 and ALOS-PALSAR-2 on a sedenion basis. The validation using signatures of typical land cover classes showed that efficient archiving in 4-bit images still guarantees an accuracy of over 90% in the class assignment. Due to the stability of the resulting class signatures, the fuzziness to be captured by machine learning algorithms is minimized at the same time. Thus, this methodology is predestined to act as a new standard for ARD remote sensing data, with subsequent image fusion processed in so-called data cubes.
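
As a rough illustration only: a projection onto an orthogonal basis of hypercomplex dimension (8 channels for an octonion basis, 16 for a sedenion one) can be sketched with a normalized Hadamard matrix. This is a minimal sketch under that assumption; the channel stack and basis choice are illustrative, not the authors' exact Kennaugh-like transform.

```python
import numpy as np
from scipy.linalg import hadamard

def kennaugh_like_elements(channels):
    """Project stacked image channels onto an orthogonal basis.

    channels: array of shape (k, rows, cols), with k a power of two
    (e.g. 8 for an octonion-dimensional basis, 16 for a sedenion one).
    Returns k Kennaugh-like elements of the same shape.
    """
    k = channels.shape[0]
    basis = hadamard(k) / np.sqrt(k)   # orthonormal: basis @ basis.T == I
    flat = channels.reshape(k, -1)     # (k, rows*cols)
    elements = basis @ flat            # orthogonal mix of SAR + optical bands
    return elements.reshape(channels.shape)

# Hypothetical co-registered stack: 2 SAR intensities + 6 optical bands
rng = np.random.default_rng(0)
stack = rng.random((8, 64, 64))
k_elems = kennaugh_like_elements(stack)
print(k_elems.shape)                   # (8, 64, 64)
```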

Drones ◽  
2020 ◽  
Vol 4 (2) ◽  
pp. 21 ◽  
Author(s):  
Francisco Rodríguez-Puerta ◽  
Rafael Alonso Ponce ◽  
Fernando Pérez-Rodríguez ◽  
Beatriz Águeda ◽  
Saray Martín-García ◽  
...  

Controlling vegetation fuels around human settlements is a crucial strategy for reducing fire severity in forests, buildings, and infrastructure, as well as for protecting human lives. Each country has its own regulations in this respect, but they all share the premise that reducing fuel load reduces the intensity and severity of fire. Unmanned Aerial Vehicle (UAV)-acquired data combined with other passive and active remote sensing data offer the best performance for planning Wildland-Urban Interface (WUI) fuelbreaks through machine learning algorithms. Nine remote sensing data sources (active and passive) and four supervised classification algorithms (Random Forest, Linear and Radial Support Vector Machine, and Artificial Neural Networks) were tested to classify five fuel-area types. We used very high-density Light Detection and Ranging (LiDAR) data acquired by UAV (154 returns·m−2 and an ortho-mosaic with 5 cm pixels), multispectral data from the Pleiades-1B and Sentinel-2 satellites, and low-density LiDAR data acquired by Airborne Laser Scanning (ALS) (0.5 returns·m−2, ortho-mosaic with 25 cm pixels). Through the Variable Selection Using Random Forest (VSURF) procedure, a pre-selection of final variables was carried out to train the model. The four algorithms were compared, and the differences among them in overall accuracy (OA) on the training datasets were negligible. Although the highest accuracy in training was obtained with SVML (OA = 94.46%) and in testing with ANN (OA = 91.91%), Random Forest was considered the most reliable algorithm, since it produced more consistent predictions owing to the smaller differences between training and testing performance. Using a combination of Sentinel-2 and the two LiDAR data sets (UAV and ALS), Random Forest obtained an OA of 90.66% on the training and 91.80% on the testing datasets. The differences in accuracy between the data sources used are much greater than those between algorithms. LiDAR growth metrics calculated from point clouds on different dates, together with multispectral information from different seasons of the year, are the most important variables in the classification. Our results support the essential role of UAVs in fuelbreak planning and management and, thus, in the prevention of forest fires.
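
The train/test consistency check described above can be sketched in a few lines of scikit-learn; this is a minimal sketch with placeholder features and labels (not the study's data or its exact hyperparameters).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical feature matrix: 12 VSURF-preselected LiDAR + multispectral metrics
rng = np.random.default_rng(42)
X = rng.random((500, 12))
y = rng.integers(0, 5, size=500)       # five fuel-area types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Reliability criterion from the abstract: a small train/test OA gap
oa_train = accuracy_score(y_tr, rf.predict(X_tr))
oa_test = accuracy_score(y_te, rf.predict(X_te))
print(f"OA train={oa_train:.4f}  OA test={oa_test:.4f}  gap={oa_train - oa_test:.4f}")
```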


2020 ◽  
Vol 12 (23) ◽  
pp. 3933
Author(s):  
Anggun Tridawati ◽  
Ketut Wikantika ◽  
Tri Muji Susantoro ◽  
Agung Budi Harto ◽  
Soni Darmawan ◽  
...  

Indonesia is the world's fourth largest coffee producer. Coffee plantations cover 1.2 million ha of the country, with a production of 500 kg/ha. However, information regarding the distribution of coffee plantations in Indonesia is limited. This study aimed to assess the accuracy of a classification model and determine its most important variables for mapping coffee plantations. The model used 29 variables derived from the integration of multi-resolution, multi-temporal, and multi-sensor remote sensing data, namely pan-sharpened GeoEye-1, multi-temporal Sentinel-2, and DEMNAS. Applying a random forest algorithm (trees = 1000, mtry = all variables, minimum node size = 6), this model achieved an overall accuracy, kappa statistic, producer's accuracy, and user's accuracy of 79.333%, 0.774, 92.000%, and 90.790%, respectively. In addition, the 12 most important variables achieved an overall accuracy, kappa statistic, producer's accuracy, and user's accuracy of 79.333%, 0.774, 91.333%, and 84.570%, respectively. Our results indicate that the random forest algorithm is efficient for mapping coffee plantations in an agroforestry system.
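
All four reported metrics follow from a confusion matrix. A minimal sketch with made-up counts (not the study's data), assuming rows hold reference labels and columns hold predictions:

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, Cohen's kappa, and per-class producer's/user's accuracy
    from a confusion matrix (rows = reference, cols = predicted)."""
    n = cm.sum()
    oa = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    producers = np.diag(cm) / cm.sum(axis=1)              # omission side
    users = np.diag(cm) / cm.sum(axis=0)                  # commission side
    return oa, kappa, producers, users

# Illustrative 2-class matrix (coffee vs. non-coffee); counts are invented
cm = np.array([[138, 12],
               [ 19, 131]])
oa, kappa, pa, ua = accuracy_metrics(cm)
print(f"OA={oa:.3f} kappa={kappa:.3f} PA={pa.round(3)} UA={ua.round(3)}")
```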


PLoS ONE ◽  
2016 ◽  
Vol 11 (12) ◽  
pp. e0165016 ◽  
Author(s):  
Alexander Toet ◽  
Maarten A. Hogervorst ◽  
Alan R. Pinkus

Afrika Focus ◽  
1991 ◽  
Vol 7 (1) ◽  
pp. 15-48
Author(s):  
Beata Maria De Vliegher

The mapping of land use in a tropical wet-and-dry area (East-Mono, Central Togo) was made using remote sensing data recorded by the SPOT satellite. The negative, multispectral image data set was converted into positives by photographic means and afterwards enhanced using the diazo technique. The combination of the different diazo-coloured images resulted in a false colour composite, which served as the basic document for the visual image interpretation. The image analysis, based upon differences in colour and texture, resulted in a photomorphic unit map. The use of a decision tree incorporating the various image characteristics allowed the conversion of the photomorphic unit map into a land cover map. In this way, six main land cover types could be differentiated, resulting in 16 different classes in the final map. KEY WORDS: Remote sensing, SPOT, Multispectral view, Visual image interpretation, Mapping, Vegetation, Land use, Togo.


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2401 ◽  
Author(s):  
Chuanliang Sun ◽  
Yan Bian ◽  
Tao Zhou ◽  
Jianjun Pan

Crop-type identification is very important in agricultural regions. Most researchers in this area have focused on exploring the ability of synthetic-aperture radar (SAR) sensors to identify crops. This paper uses multi-source (Sentinel-1, Sentinel-2, and Landsat-8) and multi-temporal data to identify crop types. A change detection method was used to analyze spectral and index information in the time series. Significant differences in crop growth status during the growing season were found, and three clearly differentiated temporal features were extracted. Three advanced machine learning algorithms (Support Vector Machine, Artificial Neural Network, and Random Forest (RF)) were used to identify the crop types. The results showed that detecting changes in VV (vertical-vertical), VH (vertical-horizontal), and the Cross Ratio (CR) was effective for identifying land cover. Moreover, the red-edge changes differed clearly across crop growth periods, and Sentinel-2 and Landsat-8 also showed different normalized difference vegetation index (NDVI) changes. Using a single remote sensing data source, Sentinel-2 produced the highest overall accuracy (0.91) and Kappa coefficient (0.89). The combination of Sentinel-1, Sentinel-2, and Landsat-8 data provided the best overall accuracy (0.93) and Kappa coefficient (0.91). The RF method performed best in the classification, and the index features dominated the classification results. The combination of phenological period information with multi-source remote sensing data can be used to explore a crop area and its status in the growing season. The results of crop classification can be used to analyze the density and distribution of crops. This study can also help determine crop growth status, improve the accuracy of crop yield estimation, and provide a basis for crop management.
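
A minimal sketch of the change features described, assuming the common conventions that CR is the VH/VV ratio (a difference in dB) and that change features are date-to-date differences; the values are invented for illustration.

```python
import numpy as np

def cross_ratio_db(vh_db, vv_db):
    """Cross Ratio in dB: CR = VH - VV (i.e. the VH/VV ratio in linear units)."""
    return vh_db - vv_db

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Hypothetical time series for one field (three acquisitions in the season)
vv = np.array([-11.2, -9.8, -13.0])    # Sentinel-1 VV backscatter, dB
vh = np.array([-17.5, -15.1, -19.4])   # Sentinel-1 VH backscatter, dB
nir = np.array([0.31, 0.45, 0.52])     # Sentinel-2/Landsat-8 NIR reflectance
red = np.array([0.12, 0.08, 0.06])

cr = cross_ratio_db(vh, vv)
ndvi_series = ndvi(nir, red)

# Change-detection features: differences between consecutive dates
print("CR changes:  ", np.diff(cr).round(2))
print("NDVI changes:", np.diff(ndvi_series).round(3))
```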


2020 ◽  
Vol 12 (2) ◽  
pp. 299 ◽  
Author(s):  
Yanan Du ◽  
Guangcai Feng ◽  
Lin Liu ◽  
Haiqiang Fu ◽  
Xing Peng ◽  
...  

Coastal areas are usually densely populated, economically developed, and ecologically sensitive, and they are subject to an increasingly serious phenomenon: land subsidence. Land subsidence can accelerate the increase in relative sea level, lead to a series of potential hazards, and threaten the stability of the ecological environment and human lives. In this paper, we adopted two commonly used multi-temporal interferometric synthetic aperture radar (MTInSAR) techniques, small baseline subset (SBAS) InSAR and temporarily coherent point (TCP) InSAR, to monitor land subsidence along the entire coastline of Guangdong Province. The long-wavelength L-band ALOS/PALSAR-1 dataset collected from 2007 to 2011 was used to generate the average deformation velocity and deformation time series. Linear subsidence rates over 150 mm/yr are observed in the Chaoshan Plain. The spatiotemporal characteristics are analyzed and then compared with land use and geology to infer potential causes of the land subsidence. The results show that (1) subsidence with notable rates (>20 mm/yr) mainly occurs in areas of aquaculture, followed by urban, agricultural, and forest areas, with percentages of 40.8%, 37.1%, 21.5%, and 0.6%, respectively; (2) subsidence is mainly concentrated in the compressible Holocene deposits and is clearly associated with the thickness of the deposits; and (3) groundwater exploitation for aquaculture and agricultural use outside city areas is probably the main cause of subsidence along these coastal areas.
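
In the simplest case, the average deformation velocity at one coherent point reduces to a least-squares line fit through its displacement time series. This is a minimal sketch with hypothetical values, not the full SBAS/TCP network inversion.

```python
import numpy as np

def linear_velocity(dates_years, displacement_mm):
    """Fit displacement = v * t + c by least squares; return rate v in mm/yr."""
    A = np.vstack([dates_years, np.ones_like(dates_years)]).T
    v, c = np.linalg.lstsq(A, displacement_mm, rcond=None)[0]
    return v

# Hypothetical ALOS/PALSAR-1 series for one coherent point (2007-2011)
t = np.array([0.0, 0.4, 0.9, 1.5, 2.1, 2.8, 3.4, 4.0])          # years
d = np.array([0.0, -55, -130, -220, -310, -420, -515, -600.0])  # LOS displ., mm

print(f"linear rate = {linear_velocity(t, d):.1f} mm/yr")  # strongly subsiding point
```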


Universe ◽  
2021 ◽  
Vol 7 (7) ◽  
pp. 211
Author(s):  
Xingzhu Wang ◽  
Jiyu Wei ◽  
Yang Liu ◽  
Jinhao Li ◽  
Zhen Zhang ◽  
...  

Recently, astronomy has witnessed great advances in detectors and telescopes. Imaging data collected by these instruments are organized into very large datasets that form data-oriented astronomy. The imaging data contain many radio galaxies (RGs) that are interesting to astronomers. However, considering that the scale of astronomical databases in the information age is extremely large, a manual search for these galaxies is impractical given the amount of labor required. Therefore, the ability to detect specific types of galaxies largely depends on computer algorithms. Applying machine learning algorithms to large astronomical data sets can more effectively detect galaxies in photometric images. Astronomers are therefore motivated to develop tools that automatically analyze massive imaging data, including automatic morphological detection of specified radio sources. The Galaxy Zoo projects have generated great interest in visually classifying galaxy samples using CNNs. Banfield et al. studied radio morphologies and host galaxies derived from visual inspection in the Radio Galaxy Zoo project. However, there are relatively many studies on galaxy classification and far fewer on galaxy detection. We develop a galaxy detection model that performs both the localization and classification of Fanaroff–Riley class I (FR I) and Fanaroff–Riley class II (FR II) galaxies. The field of object detection has developed rapidly since the convolutional neural network was proposed. You Only Look Once: Unified, Real-Time Object Detection (YOLO) is a neural-network-based detection model proposed by Redmon et al. We made several improvements to the detection of dense galaxies based on the original YOLOv5, mainly the following. (1) We use the Varifocal loss, which weighs positive and negative samples asymmetrically and highlights the principal positive samples in the training phase. (2) Our neural network model adds an attention mechanism to the convolution kernels so that the feature extraction network can dynamically adjust the size of the receptive field in deep convolutional neural networks; in this way, our model adapts well to galaxies of different sizes in an image. (3) We use empirical practices suited to small-target detection, such as image segmentation and reducing the stride of the convolutional layers. Beyond these three main contributions, this work also combines different data sources, i.e., radio images and optical images, aiming at better classification performance and more accurate positioning. We used optical image data from SDSS, radio image data from FIRST, and label data from FR I and FR II catalogs to create a data set of FR Is and FR IIs. Subsequently, we used this data set to train our improved YOLOv5 model and finally realized the automatic classification and detection of FR Is and FR IIs. Experimental results show that our improved method achieves better performance: the mAP@0.5 of our model reaches 82.3%, and the locations (RA and Dec) of the galaxies are identified more accurately. Our model has real astronomical value. For example, it can help astronomers find FR I and FR II galaxies to build a larger-scale galaxy catalog, and the detection method can be extended to other types of RGs.
Thus, astronomers can locate a specific type of galaxy in a considerably shorter time and with minimal human intervention, or combine the detections with other observational data (spectra and redshifts) to explore further properties of the galaxies.
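
A minimal NumPy sketch of the asymmetric weighting described in point (1), following the published Varifocal loss formulation (positives weighted by their target score q, negatives down-weighted by alpha * p**gamma); the predictions and targets are invented for illustration.

```python
import numpy as np

def varifocal_loss(p, q, alpha=0.75, gamma=2.0, eps=1e-7):
    """Varifocal loss: positives (q > 0) are weighted by their target score q;
    negatives (q == 0) are down-weighted by alpha * p**gamma."""
    p = np.clip(p, eps, 1 - eps)
    bce = -(q * np.log(p) + (1 - q) * np.log(1 - p))   # per-element BCE
    weight = np.where(q > 0, q, alpha * p**gamma)      # asymmetric weighting
    return (weight * bce).sum()

# Hypothetical scores: two positives (targets = IoU-aware scores), two negatives
p = np.array([0.80, 0.55, 0.30, 0.05])
q = np.array([0.90, 0.70, 0.00, 0.00])
print(f"VFL = {varifocal_loss(p, q):.4f}")
```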


Author(s):  
R. A. Parekh ◽  
R. L. Mehta ◽  
A. Vyas

Radar sensors can be used for large-scale vegetation mapping and monitoring using backscatter coefficients in different polarisations and wavelength bands. Owing to cloud and haze interference, optical images are not always available at all phenological stages important for crop discrimination. Moreover, in cloud-prone areas, an exclusively SAR-based approach would provide an operational solution. This paper presents the results of classifying cropped and non-cropped areas using multi-temporal SAR images. Dual-polarised C-band RISAT MRS (Medium Resolution ScanSAR mode) data were acquired on 9 Dec. 2012, 28 Jan. 2013, and 22 Feb. 2013 at 18 m spatial resolution. Intensity images of two polarisations (HH, HV) were extracted and converted into backscattering coefficient images. Cross-polarisation ratio (CPR) images and the radar fractional vegetation density index (RFDI) were created from the temporal data and integrated with the multi-temporal images. Signatures of cropped and uncropped areas were used for maximum likelihood supervised classification. The separability of cropped and uncropped classes using different polarisation combinations was assessed, and a classification accuracy analysis was carried out. An FCC (False Colour Composite) prepared using the best three SAR polarisations in the data set was compared with a LISS-III (Linear Imaging Self-Scanning System-III) image. The acreage under rabi crops was estimated. The methodology was developed for the rabi cropped area owing to the availability of SAR data for the rabi season; however, the approach is even more relevant for acreage estimation of kharif crops, when frequent cloud cover prevails during the monsoon season and optical sensors fail to deliver good-quality images.
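
A minimal sketch of the ratio images described, assuming the common definitions CPR = HV/HH and RFDI = (HH - HV)/(HH + HV) in linear power units; the abstract does not spell these formulas out, so they are assumptions, and the sigma0 values are invented.

```python
import numpy as np

def db_to_linear(sigma0_db):
    """Convert a backscattering coefficient from dB to linear power."""
    return 10.0 ** (sigma0_db / 10.0)

# Hypothetical RISAT-1 MRS sigma0 values (dB) for a handful of pixels
hh_db = np.array([-7.5, -9.0, -12.3, -6.8])
hv_db = np.array([-14.2, -16.5, -21.0, -12.9])

hh, hv = db_to_linear(hh_db), db_to_linear(hv_db)

cpr = hv / hh                        # cross-polarisation ratio
rfdi = (hh - hv) / (hh + hv)         # radar vegetation/degradation-style index

print("CPR :", cpr.round(3))
print("RFDI:", rfdi.round(3))
```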




2021 ◽  
Author(s):  
Thomas Linsenmann ◽  
Andrea Cattaneo ◽  
Alexander März ◽  
Judith Weiland ◽  
Christian Stetter ◽  
...  

Abstract
Purpose: Mobile 3-dimensional fluoroscopes are available in a number of neurosurgical departments and can be used, in combination with simple image post-processing, to depict cerebral vessels. In preparation for stereotactic surgery, preoperative computed tomography (CT) may be required for image fusion. Contrast CT may be of further advantage for image fusion, as it captures the vessel anatomy for trajectory planning. Time-consuming in-hospital transports are necessary for this purpose. Mobile 3D fluoroscopes may be used to generate a preoperative data set equivalent to CT without an in-hospital transport. This study was performed to determine the feasibility and image quality of intraoperative 3-dimensional fluoroscopy with intravenous contrast administration.
Methods: Six patients were included in this feasibility study. Their heads were fixed in a radiolucent Mayfield clamp. A rotational fluoroscopy scan was performed with 50 mL of iodine contrast agent. The image data sets were merged with the existing MRI images at a planning station and visually evaluated by two observers. The operation times were compared between the frame-based and frameless systems ("skin-to-skin" and "OR entry to exit").
Results: No adverse effects were observed. The entire procedure, from fluoroscope positioning to the transfer to the planning station, took 5 to 6 minutes, with an image acquisition time of 24 seconds. In 5 of 6 cases, the fused imaging reproduced the vascular anatomy accurately and in good quality. Both time end-points were significantly shorter compared to frame-based interventions.
Conclusion: The images could easily be transferred to the planning and navigation system and were successfully merged with the MRI data set. The procedure can be completely integrated into the surgical workflow. Preoperative CT imaging or transport under anaesthesia may even be replaced by this technique in the future. Furthermore, haemorrhages can be successfully visualized intraoperatively, which might prevent time delays in emergencies.

