The ESA Sentinel Next-Generation Land & Ocean Optical Imaging Architectural Study, an Overview

Author(s):  
Armin Löscher
Philippe Martimort
Simon Jutz
Ferran Gascon
Craig Donlon
...  

In 2018, ESA initiated an architectural design study to prepare the development of the next generation of the optical component of Sentinel-2 and Sentinel-3. This encompasses the next generation of the Multi Spectral Imager (MSI), Ocean and Land Colour Instrument (OLCI) and Sea and Land Surface Temperature Radiometer (SLSTR) observations. The aim of this activity was to analyse and trade off different architectural options for the next generation of the Copernicus Space Component optical imaging missions on a 2032 time horizon. The analysis considered user needs, mainly those of the Copernicus Marine and Land services, starting from the user requirements for Copernicus Next Generation derived from EC studies and related workshops. It also drew on experience and lessons learned from the current generation of Sentinel-2 and Sentinel-3, to ensure continuity of services and, where identified as necessary, further enhancements to meet new and emerging user needs. The study also investigated trends in other spaceborne optical missions, both those of national agencies in Europe and worldwide and commercial missions, e.g. the advent of “New Space” constellations of small satellites. Observation gaps and potential synergies were identified to avoid duplication when establishing the architecture of the next generation of the Copernicus Space Component optical imaging family for land and ocean applications. A wide range of scenarios was analysed for combining several observation capabilities within the same instrument, on the same platform, or on satellites flying in formation, assessing their pros and cons against scenarios with a free-flying satellite for each observation capability. Based on this analysis of user needs, the gap/synergy analysis and the architectural concept trade-offs, high-level mission assumptions and technical requirements are being established for the continuity of the MSI, OLCI and SLSTR observations, including any additional elements identified as necessary to meet user requirements in the respective Copernicus services and application areas.

Author(s):  
K. Wong
C. Ellul

Despite significant developments, 3D technologies are still not fully exploited in practice, owing to a lack of awareness and a limited understanding of who the users of 3D will be and what their requirements are. From a National Mapping & Cadastral Agency and data acquisition perspective, each new 3D feature type, and each element added within a feature (such as doors, windows, chimneys or street lights), requires additional processing and cost to create. There is therefore a need to understand the importance of different 3D features and components for different applications. This will allow capture effort to be directed towards items that are relevant to a wide range of users, and help establish the current status of, and interest in, 3D at a national level. This paper reports the results of an initial requirements-gathering exercise for 3D geographic information (GI) in the United Kingdom (UK). It describes a user-centred design approach in which usability and user needs are given extensive attention at each stage of the design process. Web-based questionnaires and semi-structured face-to-face interviews were used as complementary data collection methods to understand user needs. The results from this initial study showed that while some applications lead the field with a high adoption of 3D, others lag behind, predominantly owing to organisational inertia. While individuals may be positive about the use of 3D, many struggle to justify the value and business case for 3D GI. Further work is required to identify the specific geometric and semantic requirements of different applications and to repeat the study with a larger sample.


Author(s):  
Konstantine P. Georgakakos
Theresa M. Modrick
Eylon Shamir
Rochelle Campbell
Zhengyang Cheng
...  

Abstract
At the beginning of the 21st century, a research-to-operations program was initiated to design and develop operational systems that support local forecasters in their challenging task of providing advance warning of flash floods worldwide. Some twenty years later, the Flash Flood Guidance System (FFGS), with global coverage, provides real-time assessment and guidance products to more than 60 countries, serving nearly 3 billion people. The implementation domains cover a wide range of hydroclimatological, geomorphological and land-use regimes worldwide. This flexible and evolving system combines meteorological and hydrological data and concepts, and supports product utility for flash-flood disaster mitigation on very large scales with high spatial and temporal resolution. Through quality-control procedures, it integrates remotely sensed data of land-surface precipitation and land-surface properties from geostationary and polar-orbiting satellite platforms, reflectivity data from a variety of weather radar systems, and asynchronous precipitation data from ground-based automated precipitation gauges, in order to produce assessments and short-term forecasts that support forecasters and disaster managers in real time. For each region, it also integrates mesoscale meteorological model forecasts with land-surface model response to produce longer-term guidance products. It contains components and interfaces that allow real-time forecaster adjustments to products based on last-minute local field information and relevant forecaster experience. Assessments of its utility for flash flood warning operations by national forecasting agencies worldwide are positive. The article exemplifies the process of realization and evolution of the FFGS from research in interdisciplinary fields to operations in diverse environments, and discusses lessons learned.
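To make the guidance concept concrete, the following is a minimal sketch of the core idea: the rainfall depth of a given duration that would just produce the threshold (bankfull) runoff of a small stream, given current soil moisture. The saturation-excess runoff curve and all parameter values are hypothetical placeholders, not the operational FFGS hydrologic model.

```python
# Minimal sketch of the flash flood guidance (FFG) concept: the rainfall
# depth (for a fixed duration) that would just produce the threshold runoff
# (bankfull flow) of a small stream. The runoff curve and parameters are
# hypothetical, not the operational FFGS hydrologic model.
from scipy.optimize import brentq

def runoff_mm(rain_mm: float, soil_deficit_mm: float) -> float:
    """Toy saturation-excess runoff: rain in excess of the soil deficit runs off."""
    return max(0.0, rain_mm - soil_deficit_mm)

def flash_flood_guidance(threshold_runoff_mm: float, soil_deficit_mm: float) -> float:
    """Invert the runoff curve: the rainfall that yields exactly the threshold runoff."""
    return brentq(
        lambda rain: runoff_mm(rain, soil_deficit_mm) - threshold_runoff_mm,
        0.0, 1000.0,  # physically plausible search bracket (mm)
    )

# Example: a 20 mm threshold runoff and a 35 mm soil moisture deficit give
# a guidance value of 55 mm of rain for the chosen duration.
print(flash_flood_guidance(threshold_runoff_mm=20.0, soil_deficit_mm=35.0))
```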


2021
Author(s):
Nurul Aminah Mohd Azmi
Grant Veroba
Muhammad Aizuddin Zainalabidin

Abstract
This paper provides a case study in Front End Project Realization digitalization from the Domain perspective, with a focus on the methodology used, the process enhancements enabled through automation, and lessons learned during the transformation. The transformation has been an iterative process, first focusing on digitalizing modules within the Front End work process and evolving into a multi-discipline integrated digital application. Along the journey, application of Agile project strategies enabled continuous enhancements to be identified and implemented through lessons learned, formal design-thinking reviews, new idea generation and informal engagements with other disciplines commencing their digital journey. The process enhancements include:
- New Ways of Working to seamlessly integrate Front End technical and cost analytics engines, and to integrate across broader enterprise digital Field Development processes.
- New Sources of Insight to expand ideation using cross-industry learnings, maximize use of extensive internal project data, and embed Best in Class benchmarking.
The Front End digitalization process identified significant value to stakeholders through an increased pace of delivery, improved early concept definition with limited human intervention, increased cost accuracy, and increased confidence in the results through replication, improved data supply and benchmarking rigor. Specific value unlocks are seen across Front End Loading (FEL), i.e., the pre-FEL to FEL-2 stages, and will be presented. Through the incorporation of enhanced data and insights, improved cost compression and decision-making quality have also been identified, which will in turn improve project economics. A number of challenges through the transformation process were identified, including: integration or replacement of legacy technical and cost applications; identifying and digitalizing the wide range of internal engineering tools and data sources needed for a comprehensive digital Front End process; efficiently translating technical requirements to the digital team through comprehensive mapping of design and experience-based rules; and re-shaping the Front End technical focus from deliverable generation to targeted assurance, value obsession and risk management. While the major focus has been on the integration of internal technical and cost applications, significant challenges were also identified in the integration of external applications and in Application Programming Interface (API) readiness to allow interaction between those applications and the Front End digital application, i.e., the Concept Factory. Finally, the challenges of building a high-performance team with the right balance of Domain experts, translators and programmers will be discussed.


2019
Vol 11 (9)
pp. 1124
Author(s):  
David Frantz

Ever-increasing data volumes of satellite constellations call for multi-sensor analysis ready data (ARD) that relieve users from the burden of costly preprocessing steps. This paper describes the scientific software FORCE (Framework for Operational Radiometric Correction for Environmental monitoring), an ‘all-in-one’ solution for the mass processing and analysis of Landsat and Sentinel-2 image archives. FORCE is increasingly used to support a wide range of scientific to operational applications that need large-area as well as deep and dense temporal information. FORCE is capable of generating Level 2 ARD and higher-level products. Level 2 processing comprises state-of-the-art cloud masking and radiometric correction (including corrections that go beyond the ARD specification, e.g., topographic or bidirectional reflectance distribution function correction). It further includes data cubing, i.e., spatial reorganization of the data into a non-overlapping grid system for enhanced efficiency and simplicity of ARD usage. However, the usage barrier of Level 2 ARD is still high due to the considerable data volume and the spatial incompleteness of valid observations (e.g., due to clouds). Thus, the higher-level modules temporally condense multi-temporal ARD into manageable amounts of spatially seamless data. For data mining purposes, per-pixel statistics of clear-sky data availability can be generated. FORCE provides functionality for compiling best-available-pixel composites and spectral temporal metrics, both of which utilize all available observations within a defined temporal window, using selection and statistical aggregation techniques, respectively. These products are immediately fit for common Earth observation analysis workflows, such as machine learning-based image classification, and are thus referred to as highly analysis ready data (hARD). FORCE provides data fusion functionality to improve the spatial resolution of (i) coarse continuous fields like land surface phenology and (ii) Landsat ARD, using Sentinel-2 ARD as prediction targets. Quality-controlled time series preparation and analysis functionality is provided, with a number of aggregation and interpolation techniques, land surface phenology retrieval, and change and trend analyses. Outputs of this module can be directly ingested into a geographic information system (GIS) to fuel research questions without any further processing, i.e., hARD+. FORCE is open-source software under the terms of the GNU General Public License (version 3 or later) and can be downloaded from http://force.feut.de.
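As an illustration of the spectral temporal metrics described above, the following sketch aggregates all clear-sky observations in a temporal window into per-pixel statistics. It uses synthetic NumPy arrays and is not FORCE's actual implementation (FORCE is a compiled command-line tool); all names here are invented for the example.

```python
# Minimal sketch of a "spectral temporal metric": per-pixel statistical
# aggregation of all clear-sky observations within a temporal window.
# Synthetic arrays only; not FORCE's actual implementation.
import numpy as np

def spectral_temporal_metrics(stack: np.ndarray, clear: np.ndarray) -> dict:
    """stack: (time, y, x) reflectances; clear: boolean clear-sky mask of the same shape."""
    masked = np.where(clear, stack, np.nan)  # discard cloudy observations
    return {  # pixels with no clear observation remain NaN
        "p25": np.nanpercentile(masked, 25, axis=0),
        "median": np.nanmedian(masked, axis=0),
        "p75": np.nanpercentile(masked, 75, axis=0),
        "std": np.nanstd(masked, axis=0),
    }

# Synthetic example: 12 acquisitions of a 4 x 4 tile with ~30% cloud cover.
rng = np.random.default_rng(0)
stack = rng.uniform(0.0, 0.6, size=(12, 4, 4))
clear = rng.random((12, 4, 4)) > 0.3
print(spectral_temporal_metrics(stack, clear)["median"].shape)  # (4, 4)
```

Each output layer is spatially seamless in the sense that gaps in any single acquisition are filled by statistics over the remaining clear observations.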


Public Voices
2016
Vol 14 (1)
pp. 115
Author(s):  
Mary Coleman

The author of this article argues that the two-decades-long litigation struggle was necessary to push the political actors in Mississippi into a more virtuous than vicious legal/political negotiation. The second and related argument, however, is that neither the 1992 United States Supreme Court decision in Fordice nor the negotiation provided an adequate riposte to the plaintiffs’ claims. The author shows that the plaintiffs’ chief counsel for the first phase of the litigation wanted equality of opportunity for historically black colleges and universities (HBCUs), as did the plaintiffs. In the course of explicating the role of a legal grass-roots humanitarian, Coleman suggests lessons learned and trade-offs from that case/negotiation, describing the trade-offs as part of the political vestiges of legal racism in black public higher education and the need to move HBCUs to a higher level of opportunity at a critical juncture in the life of tuition-dependent colleges and universities in the United States. Throughout the essay the following questions pose themselves: In thinking about the Road to Fordice and to political settlement, would the Justice Department lawyers and the plaintiffs’ lawyers connect at the point of their shared strength? Would the timing of the settlement benefit the plaintiffs and/or the State? Could the plaintiffs’ lawyers hold together for the length of the case and move each piece of the case forward in a winning strategy? Who were the plaintiffs’ opponents, and what was their strategy? With these questions in mind, the author offers an analysis of how the campaign of political/legal arguments and political/legal remedies to remove the vestiges of de jure segregation in higher education unfolded in Mississippi, with special emphasis on the initiating lawyer in Ayers v. Waller and Fordice, Isaiah Madison.


BMJ Open
2021
Vol 11 (7)
pp. e049734
Author(s):  
Katya Galactionova
Maitreyi Sahu
Samuel Paul Gideon
Saravanakumar Puthupalayam Kaliappan
Chloe Morozoff
...  

Objective To present a costing study integrated within the DeWorm3 multi-country field trial of community-wide mass drug administration (cMDA) for elimination of soil-transmitted helminths.
Design Tailored data collection instruments covering resource use, expenditure and operational details were developed for each site. These were populated alongside field activities by on-site staff. Data quality control and validation processes were established. Programmed routines were used to clean, standardise and analyse data to derive costs of cMDA and supportive activities.
Setting Field site and collaborating research institutions.
Primary and secondary outcome measures A strategy for costing interventions in parallel with field activities was discussed. Interim estimates of cMDA costs obtained with the strategy were presented for one of the trial sites.
Results The study demonstrated that it was both feasible and advantageous to collect data alongside field activities. Practical decisions on implementing the strategy, and the trade-offs involved, varied by site; trialists and local partners were key to tailoring data collection to the technical and operational realities in the field. The strategy capitalised on the established processes for routine financial reporting at sites, benefitted from high recall, and gathered operational insight that facilitated interpretation of the derived estimates. The methodology produced granular costs that aligned with the literature and allowed exploration of relevant scenarios. In the first year of the trial, net of drugs, the incremental financial cost of extending deworming of school-aged children to the whole community at the India site averaged US$1.14 (2018 USD) per person per round. A hypothesised at-scale routine implementation scenario yielded a much lower estimate of US$0.11 per person treated per round.
Conclusions We showed that costing interventions alongside field activities offers unique opportunities for collecting rich data to inform policy toward optimising health interventions and for facilitating the transfer of economic evidence from the field to the programme.
Trial registration number NCT03014167; pre-results.
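The unit-cost figures quoted above reduce to a simple calculation: incremental financial cost, net of drugs, divided by the number of people treated and the number of rounds delivered. The sketch below illustrates this with hypothetical input values, not trial data.

```python
# Minimal sketch of the unit-cost calculation behind figures such as
# "US$1.14 per person per round": incremental financial cost (net of drugs)
# divided by people treated and rounds delivered. Inputs are hypothetical
# placeholders, not DeWorm3 trial data.
def cost_per_person_per_round(incremental_cost_usd: float,
                              people_treated: int,
                              rounds: int) -> float:
    """Unit cost (USD) per person per round of community-wide MDA."""
    return incremental_cost_usd / (people_treated * rounds)

# Hypothetical example: US$228,000 over 2 rounds reaching 100,000 people
# -> US$1.14 per person per round.
print(f"US${cost_per_person_per_round(228_000, 100_000, 2):.2f}")
```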


2021
Vol 7 (1)
Author(s):
George Gillard
Ian M. Griffiths
Gautham Ragunathan
Ata Ulhaq
Callum McEwan
...  

Abstract
Combining external control with long spin lifetime and coherence is a key challenge for solid-state spin qubits. Tunnel coupling with an electron Fermi reservoir provides robust charge-state control in semiconductor quantum dots, but results in undesired relaxation of electron and nuclear spins through mechanisms that are not completely understood. Here, we unravel the contributions of tunnelling-assisted and phonon-assisted spin relaxation mechanisms by systematically adjusting the tunnel coupling over a wide range, including the limit of an isolated quantum dot. These experiments reveal fundamental limits and trade-offs in quantum dot spin dynamics: while reduced tunnelling can be used to achieve electron spin qubit lifetimes exceeding 1 s, the optical spin initialisation fidelity is reduced below 80%, limited by Auger recombination. The comprehensive understanding of electron-nuclear spin relaxation attained here provides a roadmap for the design of optimal operating conditions for quantum dot spin qubits.
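A spin lifetime such as the >1 s value quoted above is conventionally extracted by fitting an exponential decay of spin polarisation versus storage time. The sketch below shows such a fit on synthetic data; it is illustrative only and is not the authors' analysis code.

```python
# Minimal sketch of extracting a spin lifetime T1 from polarisation-decay
# data by fitting an exponential. Data are synthetic (true T1 = 1.2 s plus
# noise); not the authors' analysis code.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, t1):
    """Exponential spin relaxation; the total rate 1/T1 sums all contributing
    channels (e.g., tunnelling-assisted and phonon-assisted)."""
    return amplitude * np.exp(-t / t1)

t = np.linspace(0.0, 5.0, 40)  # storage time (s)
rng = np.random.default_rng(1)
signal = decay(t, 1.0, 1.2) + rng.normal(0.0, 0.02, t.size)

(amplitude, t1), _ = curve_fit(decay, t, signal, p0=(1.0, 1.0))
print(f"fitted T1 = {t1:.2f} s")
```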


2021
Vol 11 (13)
pp. 5859
Author(s):
Fernando N. Santos-Navarro
Yadira Boada
Alejandro Vignoni
Jesús Picó

Optimal gene expression is central to the development of both bacterial expression systems for heterologous protein production and microbial cell factories for industrial metabolite production. Our goal is to fulfill industry-level overproduction demands optimally, as measured by the following key performance metrics: titer, productivity rate, and yield (TRY). Here we use a multiscale model incorporating the dynamics of (i) the cell population in the bioreactor, (ii) substrate uptake and (iii) the interaction between the cell host and expression of the protein of interest. Our model predicts cell growth rate and cell mass distribution between enzymes of interest and host enzymes as a function of substrate uptake and of the main lab-accessible gene expression characteristics: promoter strength, gene copy number and ribosome binding site strength. We evaluated the differential roles of gene transcription and translation in shaping TRY trade-offs over a wide range of expression levels, and the sensitivity of the TRY space to variations in substrate availability. Our results show that, at low expression levels, gene transcription mainly defines TRY and gene translation has a limited effect, whereas at high expression levels TRY depends on the product of both, in agreement with experiments in the literature.
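For readers unfamiliar with the TRY metrics, the sketch below computes them for a single batch run using the standard bioprocess definitions: titer = product per culture volume, productivity = titer per unit time, yield = product formed per substrate consumed. The input values are hypothetical placeholders.

```python
# Minimal sketch of the TRY performance metrics using standard bioprocess
# definitions. All input values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class BatchRun:
    product_g: float             # product of interest formed (g)
    volume_l: float              # bioreactor working volume (L)
    duration_h: float            # batch duration (h)
    substrate_consumed_g: float  # substrate consumed (g)

    @property
    def titer(self) -> float:          # g/L
        return self.product_g / self.volume_l

    @property
    def productivity(self) -> float:   # g/L/h
        return self.titer / self.duration_h

    @property
    def yield_g_per_g(self) -> float:  # g product per g substrate
        return self.product_g / self.substrate_consumed_g

run = BatchRun(product_g=50.0, volume_l=2.0, duration_h=24.0, substrate_consumed_g=200.0)
print(run.titer, run.productivity, run.yield_g_per_g)  # 25.0 g/L, ~1.04 g/L/h, 0.25 g/g
```

The trade-off studied in the paper arises because expression choices that raise one of these three quantities (e.g., a stronger promoter raising titer) can burden the host and depress the others.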


2021
Vol 13 (7)
pp. 1340
Author(s):
Shuailong Feng
Shuguang Liu
Lei Jing
Yu Zhu
Wende Yan
...  

Highways provide key social and economic functions but generate a wide range of environmental consequences that are poorly quantified and understood. Here, we developed a before–during–after control-impact remote sensing (BDACI-RS) approach to quantify the spatial and temporal changes of environmental impacts during and after the construction of the Wujing Highway in China, using three buffer zones (0–100 m, 100–500 m, and 500–1000 m). Results showed that land cover composition experienced large changes in the 0–100 m and 100–500 m buffers, while that in the 500–1000 m buffer was relatively stable. Vegetation and moisture conditions, indicated by the normalized difference vegetation index (NDVI) and the normalized difference moisture index (NDMI), respectively, demonstrated obvious degradation–recovery trends in the 0–100 m and 100–500 m buffers, while land surface temperature (LST) experienced a progressive increase. The maximal relative changes in annual means of NDVI, NDMI, and LST were about −40%, −60%, and 12%, respectively, in the 0–100 m buffer. Although the mean values of NDVI, NDMI, and LST in the 500–1000 m buffer remained relatively stable during the study period, their spatial variabilities increased significantly after highway construction. An integrated environment quality index (EQI) showed that the environmental impact of the highway manifested most strongly in its close proximity and faded with distance. Our results showed that the effect distance of the highway was at least 1000 m, as demonstrated by the spatial changes of the indicators (both mean and spatial variability). The approach proposed in this study can be readily applied to other regions to quantify the spatial and temporal changes of disturbances from highway systems and the subsequent recovery.
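The two band indices named above follow the standard normalized-difference form, NDVI = (NIR − Red)/(NIR + Red) and NDMI = (NIR − SWIR)/(NIR + SWIR). The sketch below computes them on synthetic reflectance arrays and summarises each distance buffer; the integer buffer mask is a hypothetical stand-in for rasterized highway buffers.

```python
# Minimal sketch of the band indices used above: the standard
# normalized-difference form on synthetic reflectance arrays. The buffer
# mask is a hypothetical stand-in for rasterized 0-100 m / 100-500 m /
# 500-1000 m highway buffers.
import numpy as np

def normalized_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return (a - b) / (a + b)

rng = np.random.default_rng(2)
nir, red, swir = (rng.uniform(0.05, 0.5, (100, 100)) for _ in range(3))
ndvi = normalized_difference(nir, red)    # vegetation condition
ndmi = normalized_difference(nir, swir)   # moisture condition

buffer_zone = rng.integers(1, 4, size=(100, 100))  # codes 1, 2, 3 = three distance zones
for zone in (1, 2, 3):
    in_zone = buffer_zone == zone
    # Report mean and spatial variability per zone, the two aspects the
    # study tracks through the construction and recovery periods.
    print(zone, ndvi[in_zone].mean(), ndvi[in_zone].std(),
          ndmi[in_zone].mean(), ndmi[in_zone].std())
```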

