RECENT RESULTS OF THE FIREBIRD MISSION

Author(s):  
E. Lorenz ◽  
W. Halle ◽  
C. Fischer ◽  
N. Mettig ◽  
D. Klein

Two years ago, the German Aerospace Center (DLR) reported on the FireBird mission at ISRSE36 in Berlin (Lorenz, 2015). FireBird is a constellation of two small satellites equipped with a unique Bi-Spectral Infrared Instrument. While this instrumentation is mainly dedicated to the investigation of high-temperature events, a much wider field of applications has meanwhile been examined. The first of these satellites, TET-1, was launched on July 22, 2012, and the results of its first two years of operation were presented at ISRSE36. The second satellite, BIROS, was launched on June 22, 2016. The outstanding feature of the infrared instruments is their higher ground sampling resolution and dynamic range compared with systems such as MODIS. This allows the detection of smaller fire events and improves the quality of the quantitative analysis. The detailed analysis of the large number of data sets acquired by TET over the last two years has led to significant methodological improvements in the data processing. Although BIROS carries the same instrumentation as TET, a number of additional technological features implemented in the satellite bus expand the application field of the instruments remarkably. New high-torque wheels will allow new scanning modes and, with them, new data products, which will be discussed. A gigabit laser downlink terminal will enable near-real-time downlinks of large data volumes, reducing the response time to disaster events. With an advanced on-board processing unit, the data stream can be reduced to a dedicated list of desired parameters that is sent to users via an OrbCom modem. This technology can serve online information portals such as the Advanced Fire Information System (AFIS) in South Africa. The paper focuses on these new elements of the FireBird mission.
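The quantitative analysis of fire pixels from two infrared bands is classically formulated as a two-band mixture problem (the Dozier approach): the mid-infrared and thermal-infrared radiances of a pixel are modelled as a combination of a hot fire component and a cool background, and the two equations are solved for effective fire temperature and fractional fire area. The sketch below is an illustrative Python version of that retrieval; the band centres, background temperature, and solver choice are assumptions for illustration, not the FireBird processing chain.

```python
# Minimal sketch of a bi-spectral (Dozier-type) fire retrieval.
# Assumptions: band centres of 3.8 um (MIR) and 8.9 um (TIR), a known
# background temperature, and noise-free radiances; the actual FireBird
# processing chain is more involved.
import numpy as np
from scipy.optimize import fsolve

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck(wavelength_m, temp_k):
    """Spectral radiance of a black body [W m^-2 sr^-1 m^-1]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / np.expm1(b)

def retrieve_fire(l_mir, l_tir, t_background, lam_mir=3.8e-6, lam_tir=8.9e-6):
    """Solve the two mixture equations for fire temperature and fractional area."""
    def equations(x):
        t_fire, frac = x
        eq1 = frac * planck(lam_mir, t_fire) + (1 - frac) * planck(lam_mir, t_background) - l_mir
        eq2 = frac * planck(lam_tir, t_fire) + (1 - frac) * planck(lam_tir, t_background) - l_tir
        return [eq1, eq2]
    t_fire, frac = fsolve(equations, x0=[600.0, 1e-3])
    return t_fire, frac

if __name__ == "__main__":
    # Synthetic pixel: 0.1 % of the pixel burns at 800 K over a 300 K background.
    lam_mir, lam_tir, t_bg = 3.8e-6, 8.9e-6, 300.0
    p_true, t_true = 1e-3, 800.0
    l_mir = p_true * planck(lam_mir, t_true) + (1 - p_true) * planck(lam_mir, t_bg)
    l_tir = p_true * planck(lam_tir, t_true) + (1 - p_true) * planck(lam_tir, t_bg)
    print(retrieve_fire(l_mir, l_tir, t_bg))  # recovers approximately (800 K, 1e-3)
```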

Author(s):  
Valery Sklyarov ◽  
Iouliia Skliarova ◽  
Artjom Rjabov ◽  
Alexander Sudnitson

Computing and filtering sorted subsets are frequently required in statistical data manipulation and control applications. The main objective is to extract subsets from large data sets in accordance with some criteria, for example, with the maximum and/or the minimum values in the entire set or within predefined constraints. The paper suggests a new computation method enabling the problem indicated above to be solved in all-programmable systems-on-chip from the Xilinx Zynq family, which combine a dual-core Cortex-A9 processing unit and programmable logic linked by high-performance interfaces. The method involves highly parallel sorting networks and run-time filtering. The computations are carried out by communicating software, running on the processing unit, and hardware, implemented in the programmable logic. Practical applications of the proposed technique are also shown. The results of implementation and experiments clearly demonstrate significant speed-up of the developed software/hardware system compared with alternative software implementations.
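As a plain software reference for the two stages named in the abstract, the sketch below runs a small Batcher-style bitonic sorting network and then applies a run-time filter that keeps only values within user-defined bounds. It is an illustrative Python model, not the Zynq hardware/software implementation; on the device the compare-and-swap stages would run in programmable logic while the filtering criteria come from the Cortex-A9 software.

```python
# Sketch of the two stages described in the abstract: a parallel sorting
# network (here simulated sequentially) followed by run-time filtering of
# the sorted output against user-supplied bounds.  Pure-Python reference,
# not the Zynq hardware/software implementation.

def bitonic_sort(data):
    """Bitonic sorting network for inputs whose length is a power of two."""
    n = len(data)
    assert n and (n & (n - 1)) == 0, "bitonic network needs a power-of-two size"
    a = list(data)
    k = 2
    while k <= n:                      # size of the bitonic sequences
        j = k // 2
        while j > 0:                   # compare-and-swap distance
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

def filter_sorted_subset(sorted_data, lower=None, upper=None, max_items=None):
    """Run-time filter: keep values within [lower, upper], up to max_items."""
    out = []
    for v in sorted_data:
        if lower is not None and v < lower:
            continue
        if upper is not None and v > upper:
            break                      # sorted input: nothing larger can match
        out.append(v)
        if max_items is not None and len(out) == max_items:
            break
    return out

values = [42, 7, 19, 3, 88, 56, 23, 11]
print(filter_sorted_subset(bitonic_sort(values), lower=10, upper=60))
# -> [11, 19, 23, 42, 56]
```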


2020 ◽  
pp. 81-93
Author(s):  
D. V. Shalyapin ◽  
D. L. Bakirov ◽  
M. M. Fattakhov ◽  
A. D. Shalyapina ◽  
A. V. Melekhov ◽  
...  

The article is devoted to the quality of well casing at the Pyakyakhinskoye oil and gas condensate field. Improving the quality of well casing involves many problems, for example, the large amount of work needed to relate laboratory studies to actual field data, and the difficulty of finding logically determined relationships between the parameters and the final casing quality. The article presents a new approach to assessing the impact of various parameters, based on a mathematical apparatus that excludes subjective expert assessments and will in the future allow the method to be applied to fields with different rock and geological conditions. We propose processing large data sets with neural networks trained to predict the characteristics of well-casing quality (continuity of the cement's contact with the rock and with the casing). Taking into account the previously identified factors, we developed solutions to improve the tightness of the well casing and the adhesion of cement to the limiting surfaces.
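To illustrate the kind of model the abstract refers to, the sketch below trains a small feed-forward regressor to predict a casing-quality indicator (for example, a cement-to-rock contact-continuity ratio) from drilling and cementing parameters. The feature names, synthetic data, and scikit-learn model are illustrative assumptions only; the authors' actual feature set and network architecture are not described in the abstract.

```python
# Hedged sketch: a small neural-network regressor predicting a well-casing
# quality indicator from engineering parameters.  Feature names and data
# are synthetic placeholders, not the field data used in the article.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors: slurry density, pumping rate, standoff, wait-on-cement time.
X = np.column_stack([
    rng.normal(1.85, 0.10, n),   # slurry density, g/cm^3
    rng.normal(1.2, 0.3, n),     # pumping rate, m^3/min
    rng.uniform(0.5, 1.0, n),    # casing standoff ratio
    rng.uniform(12, 48, n),      # wait-on-cement time, h
])
# Synthetic target: contact-continuity ratio in [0, 1] with noise.
y = np.clip(0.3 + 0.25 * X[:, 2] + 0.005 * X[:, 3]
            + 0.1 * (X[:, 0] - 1.85) - 0.05 * (X[:, 1] - 1.2)
            + rng.normal(0, 0.03, n), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out wells:", round(model.score(X_test, y_test), 3))
```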


Author(s):  
Bindi Varghese

Commitment to technology, sustainability, innovation, and accessibility has not only improved the quality of life but has also created a niche for luxury tourism within the smart tourism eco-space. Smart tourism destinations (STD) need vigorous, well-connected stakeholders supported by a technological platform for data exchange. Instant data exchange creates extremely large data sets known as big data. This chapter aims to contribute to the understanding of how smart tourism destinations could potentially enhance luxury tourism that is personalized to meet visitors' unique needs and preferences. Developing smart tourism destinations can be an effective way to engage all stakeholders and tourists. To improve destination performance, the smart tourism ecosystem applies social media analytics and smart tourism technologies. This chapter also aims to relate the local area, as a smart tourism local service system (S-TLSS), to luxury tourism.


2019 ◽  
Author(s):  
Anna C. Gilbert ◽  
Alexander Vargo

Abstract Here, we evaluate the performance of a variety of marker selection methods on scRNA-seq UMI counts data. We test on an assortment of experimental and synthetic data sets that range in size from several thousand to one million cells. In addition, we propose several performance measures for evaluating the quality of a set of markers when there is no known ground truth. According to these metrics, most existing marker selection methods show similar performance on experimental scRNA-seq data; thus, the speed of the algorithm is the most important consideration for large data sets. With this in mind, we introduce RANKCORR, a fast marker selection method with strong mathematical underpinnings that takes a step towards sensible multi-class marker selection.
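To make the idea of fast, rank-based marker selection concrete, the sketch below scores each gene by the correlation between its rank-transformed UMI counts and a one-vs-rest cluster indicator, then keeps the top-scoring genes per cluster. This is a simplified stand-in written for illustration; it is not the RANKCORR algorithm itself, whose selection step is described in the paper.

```python
# Simplified rank-correlation marker selection for scRNA-seq counts.
# Illustrative only: scores genes by a Spearman-style correlation with a
# one-vs-rest cluster label; this is not the RANKCORR algorithm itself.
import numpy as np
from scipy.stats import rankdata

def rank_markers(counts, labels, n_markers=10):
    """counts: (cells x genes) UMI matrix; labels: cluster id per cell."""
    ranked = np.apply_along_axis(rankdata, 0, counts)      # rank within each gene
    ranked -= ranked.mean(axis=0)
    markers = {}
    for cluster in np.unique(labels):
        indicator = (labels == cluster).astype(float)
        indicator -= indicator.mean()
        # Correlation of each gene's ranks with the one-vs-rest indicator.
        num = ranked.T @ indicator
        denom = np.linalg.norm(ranked, axis=0) * np.linalg.norm(indicator)
        scores = num / np.maximum(denom, 1e-12)
        markers[cluster] = np.argsort(scores)[::-1][:n_markers]
    return markers

# Toy example: 200 cells, 50 genes, 2 clusters; gene 0 is up in cluster 1.
rng = np.random.default_rng(1)
counts = rng.poisson(1.0, size=(200, 50)).astype(float)
labels = np.repeat([0, 1], 100)
counts[labels == 1, 0] += rng.poisson(5.0, size=100)
print(rank_markers(counts, labels, n_markers=3))  # gene 0 should rank first for cluster 1
```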


2019 ◽  
Vol 12 (1) ◽  
pp. 34-40
Author(s):  
Mareeswari Venkatachalaappaswamy ◽  
Vijayan Ramaraj ◽  
Saranya Ravichandran

Background: Many modern applications use information filtering to expose users to a collection of data items. In such systems, users are provided with a list of recommended items they might prefer, or with predicted ratings for those items, so that they can select the items they prefer from that list. Objective: In web service recommendation based on Quality of Service (QoS), predicting QoS values greatly helps users to select the appropriate web service and to discover new services. Methods: An effective technique for this is Collaborative Filtering (CF), which greatly helps in service selection and web service recommendation. In the broader sense, CF is a general way of filtering information from large data sets; in the narrower sense, it is a method of making predictions about a user's interests by collecting taste information from many users. Results: The approach is easy to build and is effective for recommendation because it predicts the missing QoS values for users. It also addresses the scalability problem, since the recommendations are based on like-minded users found with PCC or on clusters formed with KNN, rather than on the entire data source. Conclusion: In this paper, location-aware collaborative filtering is used to recommend services. The proposed system's prediction outcomes and execution time are compared with those of existing algorithms.
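The Pearson-correlation step described above can be summarised in a few lines: compute the PCC between the active user and every other user over their commonly invoked services, pick the top-k most similar (like-minded) users, and predict the missing QoS value as a similarity-weighted deviation from the users' means. The sketch below is a generic memory-based CF illustration, not the location-aware algorithm proposed in the paper.

```python
# Generic user-based collaborative filtering for a QoS matrix (users x services).
# NaN marks a QoS value that has not been observed.  Illustrative sketch only;
# the paper's location-aware variant adds location information on top of this.
import numpy as np

def pearson(u, v):
    """PCC over the services both users have invoked."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if mask.sum() < 2:
        return 0.0
    du, dv = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((du ** 2).sum() * (dv ** 2).sum())
    return float(du @ dv / denom) if denom > 0 else 0.0

def predict_qos(qos, user, service, k=2):
    """Predict qos[user, service] from the k most similar users."""
    sims = np.array([pearson(qos[user], qos[v]) if v != user else -np.inf
                     for v in range(qos.shape[0])])
    neighbours = [v for v in np.argsort(sims)[::-1][:k]
                  if sims[v] > 0 and not np.isnan(qos[v, service])]
    base = np.nanmean(qos[user])
    if not neighbours:
        return base
    num = sum(sims[v] * (qos[v, service] - np.nanmean(qos[v])) for v in neighbours)
    den = sum(abs(sims[v]) for v in neighbours)
    return base + num / den

# Response times (s) of 4 users on 4 services; user 0 has not invoked service 3.
qos = np.array([[0.8, 1.2, 0.5, np.nan],
                [0.9, 1.1, 0.6, 2.0],
                [0.7, 1.3, 0.4, 1.8],
                [3.0, 0.2, 2.5, 0.3]])
print(round(predict_qos(qos, user=0, service=3, k=2), 2))
```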


2017 ◽  
Vol 17 (5) ◽  
pp. 68-80
Author(s):  
Armen Poghosyan ◽  
Hrachya Astsatryan ◽  
Wahi Narsisian ◽  
Yevgeni Mamasakhlisov

Abstract High Performance Computing (HPC) accelerates life science discoveries by enabling scientists to analyze large data sets, to develop detailed models of entire biological systems and to simulate complex biological processes. As computational experiments, molecular dynamics simulations are widely used in the life sciences to evaluate the equilibrium behavior of classical many-body systems. The modelling and molecular dynamics of surfactants, polymer solutions, the stability of proteins and nucleic acids under different conditions, and deoxyribonucleic acid-protein systems are studied. The study aims to understand the scaling behavior of Gromacs (Groningen machine for chemical simulations) on various platforms, and the maximum performance, also in terms of energy consumption, that can be achieved by tuning hardware and software parameters. Different system sizes (48K, 64K, and 272K) from scientific investigations have been studied; the results show that the GPU (Graphics Processing Unit) scales more beneficially than other resources, i.e., with GPU support we track a 2-3 times speedup compared to the latest multi-core CPUs. However, the so-called "threading effect" leads to better results.
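For readers reproducing this kind of benchmark, speedup is computed directly from the simulation throughput (ns/day) or wall-clock time reported by Gromacs. The short sketch below shows the arithmetic on made-up numbers; the actual figures for the 48K, 64K, and 272K systems are reported in the paper.

```python
# Speedup from benchmark throughput (illustrative numbers only, not the paper's results).
def speedup(throughput, baseline):
    """Speedup of a configuration relative to a baseline, from ns/day values."""
    return throughput / baseline

runs = {                       # hypothetical ns/day for a 64K-atom system
    "1 x CPU node":        12.0,
    "2 x CPU nodes":       21.5,
    "1 x CPU node + GPU":  30.0,
}
base = runs["1 x CPU node"]
for name, ns_day in runs.items():
    print(f"{name:>20}: {ns_day:5.1f} ns/day, speedup {speedup(ns_day, base):.1f}x")
```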


2020 ◽  
Author(s):  
◽  
Dylan G Rees

The contact centre industry employs 4% of the working population of the United Kingdom and the United States and generates gigabytes of operational data that require analysis, to provide insight and to improve efficiency. This thesis is the result of a collaboration with QPC Limited, who provide data collection and analysis products for call centres. They provided a large data set featuring almost 5 million calls to be analysed. This thesis utilises novel visualisation techniques to create tools for the exploration of the large, complex call centre data set and to facilitate unique observations into the data.

A survey of information visualisation books is presented, providing a thorough background of the field. Following this, a feature-rich application that visualises large call centre data sets using scatterplots that support millions of points is presented. The application utilises both CPU and GPU acceleration for processing and filtering and is exhibited with millions of call events.

This is expanded upon with the use of glyphs to depict agent behaviour in a call centre. A technique is developed to cluster overlapping glyphs into a single parent glyph dependent on zoom level and a customizable distance metric. This hierarchical glyph represents the mean value of all child agent glyphs, removing overlap and reducing visual clutter. A novel technique for visualising individually tailored glyphs using a Graphics Processing Unit is also presented, and demonstrated rendering over 100,000 glyphs at interactive frame rates. An open-source code example is provided for reproducibility.

Finally, a novel interaction and layout method is introduced for improving the scalability of chord diagrams to visualise call transfers. An exploration of sketch-based methods for showing multiple links and direction is made, and a sketch-based brushing technique for filtering is proposed. Feedback from domain experts in the call centre industry is reported for all applications developed.
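The glyph-aggregation step described above can be illustrated with a simple greedy pass: at a given zoom level, glyphs whose distance (under a customizable metric) falls below an overlap threshold are merged into a parent glyph carrying the mean of its children's values. The sketch below is a minimal stand-in for that idea, not the thesis implementation or its GPU renderer.

```python
# Minimal sketch of zoom-dependent glyph clustering: merge glyphs that overlap
# at the current zoom level into a parent glyph holding its children's mean.
# Greedy single pass for illustration; the thesis uses a hierarchical scheme.
from dataclasses import dataclass, field

@dataclass
class Glyph:
    x: float
    y: float
    value: float          # e.g. an agent's mean call-handling time
    children: list = field(default_factory=list)

def euclidean(a, b):
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def cluster_glyphs(glyphs, zoom, glyph_radius=10.0, metric=euclidean):
    """Merge glyphs closer (on screen) than two glyph radii at this zoom level."""
    threshold = 2.0 * glyph_radius / zoom      # screen overlap expressed in data units
    parents = []
    for g in glyphs:
        target = next((p for p in parents if metric(p, g) < threshold), None)
        if target is None:
            parents.append(Glyph(g.x, g.y, g.value, children=[g]))
        else:
            target.children.append(g)
            n = len(target.children)
            # Update parent position and value to the mean of its children.
            target.x += (g.x - target.x) / n
            target.y += (g.y - target.y) / n
            target.value += (g.value - target.value) / n
    return parents

agents = [Glyph(1, 1, 30), Glyph(2, 1, 50), Glyph(40, 40, 20)]
for zoom in (0.1, 5.0):
    merged = cluster_glyphs(agents, zoom)
    print(f"zoom {zoom}: {len(merged)} glyph(s)")   # zoomed out -> 1, zoomed in -> 2
```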


Author(s):  
Brad Morantz

Mining a large data set can be time consuming, and without constraints, the process could generate sets or rules that are invalid or redundant. Some methods, for example clustering, are effective but can be extremely time consuming for large data sets; as the set grows in size, the processing time grows exponentially. In other situations, without guidance via constraints, the data mining process might find morsels that have no relevance to the topic or are trivial and hence worthless. The knowledge extracted must be comprehensible to experts in the field (Pazzani, 1997). With time-ordered data, finding things that are in reverse chronological order might produce an impossible rule. Certain actions always precede others. Some things happen together while others are mutually exclusive. Sometimes there are maximum or minimum values that cannot be violated. Must an observation fit all of the requirements, or just most of them? And how many is "most"? Constraints attenuate the amount of output (Hipp & Guntzer, 2002). By doing first-stage constrained mining, that is, going through the data and finding records that fulfill certain requirements before the next processing stage, time can be saved and the quality of the results improved. The second stage also might contain constraints to further refine the output. Constraints help to focus the search or mining process and attenuate the computational time. This has been empirically shown to improve cluster purity (Wagstaff & Cardie, 2000; Hipp & Guntzer, 2002). The theory behind these results is that the constraints help guide the clustering, showing which items to connect and which to avoid. The application of user-provided knowledge, in the form of constraints, reduces the hypothesis space and can reduce the processing time and improve the learning quality.
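The two-stage idea in the passage above, applying the constraints first and then mining the reduced record set, is easy to state in code. The sketch below filters records against value-range and chronological-order constraints before the survivors would be handed to a clustering step; the constraint predicates and data are illustrative, and the must-link/cannot-link machinery of constrained clustering (Wagstaff & Cardie, 2000) is not reproduced here.

```python
# First-stage constrained mining: keep only records that satisfy user-supplied
# constraints before the (expensive) second-stage clustering.  Illustrative
# predicates and data; constrained clustering proper is not shown.
from datetime import datetime

records = [
    {"id": 1, "event": "order", "time": datetime(2024, 1, 5), "amount": 120.0},
    {"id": 2, "event": "ship",  "time": datetime(2024, 1, 4), "amount": 120.0},  # ships before the order window
    {"id": 3, "event": "order", "time": datetime(2024, 1, 6), "amount": -5.0},   # violates the minimum value
    {"id": 4, "event": "order", "time": datetime(2024, 1, 7), "amount": 80.0},
]

constraints = [
    lambda r: r["amount"] >= 0.0,                          # minimum-value constraint
    lambda r: not (r["event"] == "ship"                    # chronological constraint: shipping
                   and r["time"] < datetime(2024, 1, 5)),  # cannot precede the order window
]

def satisfies(record, constraints, require_all=True, min_fraction=0.8):
    """Keep a record if it meets all constraints, or at least min_fraction of them."""
    hits = sum(bool(c(record)) for c in constraints)
    return hits == len(constraints) if require_all else hits >= min_fraction * len(constraints)

stage_one = [r for r in records if satisfies(r, constraints)]
print([r["id"] for r in stage_one])   # -> [1, 4]; records 2 and 3 are pruned before clustering
```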


1990 ◽  
Vol 6 (2) ◽  
pp. 220-228 ◽  
Author(s):  
Robert W. Dubois

Abstract Modeling death rates has been suggested as a potential method to screen hospitals and identify superior and substandard providers. This article begins with a review of one hospital death rate study and focuses upon its findings and limitations. It also explores the inherent limitations in the use of large data sets to assess quality of care.


1990 ◽  
Vol 6 (2) ◽  
pp. 229-238 ◽  
Author(s):  
Susan Desharnais

Abstract This article examines how large data sets can be used for evaluating the effects of health policy changes and for flagging providers with potential quality problems. An example is presented, illustrating how three risk-adjusted measures of hospital performance were developed using patient discharge abstracts. Advantages and disadvantages of this approach are discussed.
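As a concrete illustration of a risk-adjusted hospital measure of the kind discussed above, the sketch below fits a patient-level risk model on discharge-abstract style variables and compares each hospital's observed mortality with its expected mortality (an observed-to-expected ratio). The variables and data are synthetic stand-ins; the article's three measures and their adjustment models are defined in the original text.

```python
# Sketch of a risk-adjusted mortality measure: fit a patient-level risk model,
# then compare each hospital's observed deaths with the model's expectation.
# Synthetic data and predictors; not the article's actual adjustment models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
hospital = rng.integers(0, 10, n)                # hospital identifier
age = rng.normal(70, 12, n)                      # patient age
comorbidity = rng.poisson(2, n)                  # comorbidity count
severity = rng.uniform(0, 1, n)                  # illness-severity score
logit = -6 + 0.04 * age + 0.3 * comorbidity + 1.5 * severity
died = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, comorbidity, severity])
risk_model = LogisticRegression(max_iter=1000).fit(X, died)
expected = risk_model.predict_proba(X)[:, 1]

for h in range(10):
    mask = hospital == h
    o, e = died[mask].sum(), expected[mask].sum()
    print(f"hospital {h}: observed {int(o):3d}, expected {e:6.1f}, O/E {o / e:.2f}")
```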

