Coordinated observation system for extreme weather events consisting of an AWS network with lightning sensors and micro-satellites

Author(s):  
Yukihiro Takahashi ◽  
Mitsuteru Sato ◽  
Hisayuki Kubota ◽  
Tetsuro Ishida ◽  
Ellison Castro ◽  
...  

In order to predict the intensity and location of extreme weather events, such as torrential rainfall from an individual thunderstorm or a typhoon, we are developing a new weather-monitoring methodology that combines a ground AWS network with lightning sensors and micro-satellites weighing about 50 kg, which will realize quasi-real-time thunderstorm monitoring with broad coverage. Based on the AWS network data, we plan to operate the micro-satellites in nearly real time, manipulating the satellite attitude to capture images of the most dangerous or important clouds for 3D reconstruction. We have developed and launched several micro-satellites and have been improving the target-pointing operation over the past decade. We succeeded in obtaining images of the typhoon center at a resolution of 60-100 m for Typhoon Trami in 2018 and Typhoon Maysak in 2020. Using four to a few tens of images captured from different angles by one micro-satellite as it passed over the typhoon area, 3D models of the typhoon eye were reconstructed with a ground resolution of ~100 m. Due to the unusual temperature profile around the typhoon eye, it is very difficult to estimate the height distribution of the cloud top using only a thermal infrared image at a resolution of 2 km taken by a geostationary meteorological satellite. This is one of the biggest limitations in estimating the precise intensity of typhoons, namely, the central pressure or the maximum wind velocity. The on-demand flexible operation of micro-satellites will achieve high-accuracy estimation of typhoon intensity as well as estimation of the development speed of individual thunderstorms, which can be applied to disaster management. This research was conducted by a joint team from Japan and the Philippines, supported by the Science and Technology Research Partnership for Sustainable Development (SATREPS), which is funded by the Japan Science and Technology Agency (JST) / Japan International Cooperation Agency (JICA).
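A central technical step above is reconstructing the 3D shape of the typhoon eye from images of the same scene captured at different viewing angles. The abstract does not name the software used; purely as an illustration of the underlying multi-view triangulation idea, the sketch below matches features between two views and triangulates them with OpenCV, assuming the 3x4 projection matrices P1 and P2 of the two viewing geometries are known (all function and variable names are hypothetical).

```python
import cv2
import numpy as np

def triangulate_cloud_points(img1, img2, P1, P2):
    """Match features between two satellite views and triangulate 3D cloud-top points."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T  # 2 x N pixel coordinates
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T

    # Homogeneous 4 x N result; divide by the last row to obtain XYZ.
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (X_h[:3] / X_h[3]).T  # N x 3 reconstructed cloud-top positions
```

In practice, many views and a bundle-adjustment step would be used rather than a single image pair, but the triangulation principle is the same.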

2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract Tridimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows up to 6 different layers to be loaded at the same time, with the possibility to modulate opacity and threshold in real time. The 3D VR was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, for case discussion within the neurosurgical team and during MDT discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgical strategies to be tailored to the individual patient, contributing to procedural safety and efficacy and to the global improvement of neurosurgical oncology care.
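The Surgical Theatre system is commercial and its rendering pipeline is not described in the abstract; as a generic illustration of the thresholded-layer idea, the sketch below extracts a triangle mesh from a volumetric scan at an adjustable intensity threshold with scikit-image, so lowering the threshold brings lower-intensity structures (for example, smaller vessels) into the rendered layer. Function and parameter names are assumptions.

```python
import numpy as np
from skimage import measure

def build_layer_surface(volume: np.ndarray, threshold: float, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangle mesh (vertices, faces, normals) at a given intensity threshold."""
    verts, faces, normals, _ = measure.marching_cubes(volume, level=threshold, spacing=spacing)
    return verts, faces, normals

# Re-running with a lower threshold adds lower-intensity voxels to the surface,
# mimicking the real-time threshold modulation described above.
```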


2021 ◽  
Author(s):  
Nassima Brown ◽  
Adrian Brown ◽  
Abhijeet Degupta ◽  
Barry Quinn ◽  
Dustin Stringer ◽  
...  

Abstract As the oil and gas industry faces tumultuous challenges, the adoption of cutting-edge digital technologies has been accelerated to deliver safer, more efficient operations with less impact on the environment. While advanced AI and other digital technologies have been rapidly evolving in many fields of the industry, the HSE sector is playing catch-up. With the increasing complexity of risks and safety management processes, the effective application of data-driven technologies has become significantly harder, particularly for international organizations with varying levels of digital readiness across diverse global operations. Leaders are more cautious about implementing solutions that are not fit for purpose, due to concerns over inconsistencies in rolling out the program across international markets and the impact this may have on ongoing operations. This paper describes how artificial intelligence (AI) and machine learning (ML) technologies have been applied to engineer a solution that fully digitizes and automates the end-to-end offshore behavior-based safety program across a global offshore fleet, optimizing a critical safety process used by many leading oil & gas organizations to drive a positive workplace safety culture. The complex safety program has been transformed into a clear, efficient, and automated workflow, with real-time analytics and live, transparent dashboards that detail critical safety indicators in real time, aiding decision-making and improving operational performance. The novel behavior-based safety digital solution, referred to as the 3C observation tool within Noble Drilling, has been built to be fully aligned with the organization's safety management system requirements and procedures, using modern and agile tools and applications for full scalability and easy deployment. It has been critical in sharpening the offshore safety observation program across global operations, boosting workforce engagement by 30% and subsequently increasing safety awareness skill set attainment, improving the overall offshore safety culture, all while reducing operating costs by up to 70% and cutting the carbon footprint through the elimination of 15,000 man-hours and half a million paper cards each year, compared to previously used methods and workflows.


2019 ◽  
Vol 7 (2) ◽  
pp. 71
Author(s):  
Maggie Liu

Aquatic Science and Technology (AST) would like to acknowledge the following reviewers for their assistance with the peer review of manuscripts for this issue. Many authors, regardless of whether AST publishes their work, appreciate the helpful feedback provided by the reviewers. Their comments and suggestions were of great help to the authors in improving the quality of their papers. Each of the reviewers listed below returned at least one review for this issue.

Reviewers for Volume 7, Number 2

Augusto E. Serrano, University of the Philippines Visayas, Philippines
Ayman El-Gamal, Coastal Research Institute, Egypt
David Kerstetter, Nova Southeastern University Oceanographic Center, USA
Levent BAT, Sinop University Fisheries Faculty, Turkey
Luciana Mastrantuono, Department of Environmental Biology, Italy
Tai-Sheng Cheng, National University of Taiwan, Taiwan

Maggie Liu
Aquatic Science and Technology
Macrothink Institute
5348 Vegas Dr. #825
Las Vegas, Nevada 89108
United States
Tel: 1-702-953-1852 ext. 524
Fax: 1-702-420-2900
E-mail: [email protected]
Web: http://ast.macrothink.org


2018 ◽  
Vol 10 (10) ◽  
pp. 1544 ◽  
Author(s):  
Changjiang Liu ◽  
Irene Cheng ◽  
Anup Basu

We present a new method for real-time runway detection that combines synthetic vision with an ROI (Region of Interest) based level set method. A virtual runway from synthetic vision provides a rough region of the runway in the infrared image. A three-thresholding segmentation following Otsu’s binarization method is proposed to extract a runway subset from this region, which is used to construct the initial level set function. The virtual runway also gives a reference area for the actual runway in the infrared image, which helps us design a stopping criterion for the level set evolution. To meet the needs of real-time processing, the level set evolution framework is applied to the ROI only. Experimental results show that the proposed algorithm is efficient and accurate.
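As a rough sketch of the workflow described above (not the authors' implementation), the following Python code crops the ROI given by the virtual runway, seeds a level set from the brightest multi-Otsu class as a stand-in for the proposed three-thresholding step, and stops the morphological Chan-Vese evolution once the segmented area approaches the reference area from synthetic vision; all names and tolerances are assumptions.

```python
import numpy as np
from skimage.filters import threshold_multiotsu
from skimage.segmentation import morphological_chan_vese

class _Converged(Exception):
    """Raised by the callback when the area-based stopping criterion is met."""

def detect_runway(ir_image, roi, reference_area, max_iter=200, tol=0.05):
    r0, r1, c0, c1 = roi                               # ROI from the virtual runway
    patch = ir_image[r0:r1, c0:c1].astype(float)

    # Multi-level Otsu thresholding; the brightest class seeds the level set.
    thresholds = threshold_multiotsu(patch, classes=3)
    init = patch > thresholds[-1]

    result = {"mask": init}

    def area_stopping_criterion(level_set):
        result["mask"] = level_set
        if abs(level_set.sum() - reference_area) / reference_area < tol:
            raise _Converged                           # area matches the reference -> stop

    try:
        morphological_chan_vese(patch, max_iter, init_level_set=init,
                                iter_callback=area_stopping_criterion)
    except _Converged:
        pass
    return result["mask"]
```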


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Zhaoli Wu ◽  
Xin Wang ◽  
Chao Chen

Due to limitations on energy and power consumption, embedded platforms cannot meet the real-time requirements of far-infrared image pedestrian detection algorithms. To solve this problem, this paper proposes a new real-time infrared pedestrian detection algorithm (RepVGG-YOLOv4, Rep-YOLO). It uses RepVGG to reconstruct the YOLOv4 backbone network, reducing the number of model parameters and computations and improving detection speed; it uses spatial pyramid pooling (SPP) to obtain information from different receptive fields and improve detection accuracy; and it uses a channel-pruning compression method to reduce redundant parameters, model size, and computational complexity. The experimental results show that, compared with the YOLOv4 target detection algorithm, the Rep-YOLO algorithm reduces the model volume by 90% and the floating-point operations by 93.4%, increases the inference speed by 4 times, and reaches a detection accuracy of 93.25% after compression.
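As one concrete piece of the architecture described above, the sketch below shows a PyTorch version of the SPP block, which concatenates max-pooled features at several kernel sizes so the detection head sees multiple receptive fields; the kernel sizes follow the common YOLO configuration and are assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial pyramid pooling: concatenate max-pooled features at several scales."""
    def __init__(self, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes
        )

    def forward(self, x):
        # Output channels = in_channels * (len(pool_sizes) + 1)
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)

# Example: an input of shape (1, 512, 13, 13) yields an SPP output of shape (1, 2048, 13, 13).
```

The RepVGG reparameterization and channel-pruning steps act on the convolutional weights themselves and are omitted here for brevity.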


Biomedicines ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1741
Author(s):  
Alena Kashirina ◽  
Alena Gavrina ◽  
Emil Kryukov ◽  
Vadim Elagin ◽  
Yuliya Kolesova ◽  
...  

Brain diseases, including Down syndrome (DS/TS21), are known to be characterized by changes in cellular metabolism. To adequately assess such metabolic changes during pathological processes and to test drugs, methods are needed that allow these changes to be monitored in real time and with minimal invasiveness. Thus, the aim of our work was to study the metabolic status and intracellular pH of spheroids carrying DS using fluorescence microscopy and FLIM. For the metabolic analysis, we measured the fluorescence intensities, fluorescence lifetimes, and the contributions of the free and bound forms of NAD(P)H. For the intracellular pH assay, we measured the fluorescence intensities of SypHer-2 and BCECF. Data were processed with SPCImage and Fiji-ImageJ. We demonstrated the predominance of glycolysis in TS21 spheroids compared with normal karyotype (NK) spheroids. Assessment of the intracellular pH indicated a more alkaline intracellular pH in the TS21 spheroids than in the NK spheroids. Using fluorescence imaging, we performed a comprehensive comparative analysis of the metabolism and intracellular pH of TS21 spheroids and showed that fluorescence microscopy and FLIM make it possible to study living cells in 3D models in real time with minimal invasiveness.
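The FLIM analysis above was performed with SPCImage; purely to make the free/bound NAD(P)H decomposition concrete, the sketch below fits the standard bi-exponential decay model with SciPy, where the short-lifetime component is conventionally attributed to free NAD(P)H and the long one to the protein-bound form (function names, initial guesses, and units are assumptions).

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Two-component fluorescence decay: a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_nadh_decay(t_ns, counts):
    # Starting guesses: tau1 ~ 0.4 ns (free NAD(P)H), tau2 ~ 2.5 ns (bound form).
    p0 = [counts.max(), 0.4, counts.max() / 4, 2.5]
    (a1, tau1, a2, tau2), _ = curve_fit(biexp, t_ns, counts, p0=p0, maxfev=10000)
    alpha1 = a1 / (a1 + a2)                 # relative contribution of the free form
    tau_mean = alpha1 * tau1 + (1 - alpha1) * tau2
    return {"a1_percent": 100 * alpha1, "tau1_ns": tau1, "tau2_ns": tau2, "tau_mean_ns": tau_mean}
```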


Author(s):  
J. Zhu ◽  
Y. Xu ◽  
L. Hoegner ◽  
U. Stilla

Abstract. In this work, we discuss how to directly combine a thermal infrared (TIR) image with a point cloud without additional assistance from GCPs or 3D models. Specifically, we propose a point-based co-registration process for combining the TIR image and the point cloud of buildings. Keypoints are extracted from the images and point clouds via primitive segmentation and corner detection, and pairs of corresponding points are then identified manually. After that, the camera pose is estimated with the EPnP algorithm. Finally, a point cloud carrying the thermal information provided by the IR images is generated, which is helpful for tasks such as energy inspection, leakage detection, and abnormal condition monitoring. This paper provides more insight into the feasibility of, and ideas for, combining TIR images and point clouds.
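As an illustration of the described pipeline (not the authors' code), the sketch below estimates the camera pose from the manually identified 2D-3D correspondences with OpenCV's EPnP solver and then projects every cloud point into the TIR image to attach a thermal value; the camera intrinsics K and all array names are assumptions.

```python
import cv2
import numpy as np

def colorize_points_with_tir(points_3d, pts_3d_corr, pts_2d_corr, K, tir_image):
    """Estimate the camera pose from keypoint pairs, then sample a TIR value per 3D point."""
    # EPnP needs at least 4 (non-coplanar is safer) 2D-3D correspondences.
    ok, rvec, tvec = cv2.solvePnP(
        pts_3d_corr.astype(np.float64), pts_2d_corr.astype(np.float64),
        K, None, flags=cv2.SOLVEPNP_EPNP
    )

    # Project every cloud point into the TIR image plane.
    proj, _ = cv2.projectPoints(points_3d.astype(np.float64), rvec, tvec, K, None)
    uv = np.round(proj.reshape(-1, 2)).astype(int)

    h, w = tir_image.shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    temps = np.full(len(points_3d), np.nan)
    temps[inside] = tir_image[uv[inside, 1], uv[inside, 0]]
    return temps  # per-point thermal value (NaN where the point falls outside the image)
```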

