Petrochemical Production Big Data and its Four Typical Application Paradigms

Author(s):  
Hu Shaolin ◽  
Zhang Qinghua ◽  
Su Naiquan ◽  
Li Xiwu

In recent years, big data has attracted more and more attention. It can provide more information and a broader perspective for analysing and solving problems than conventional approaches. However, there is still no widely accepted and measurable definition of the term "big data": for example, what significant features a data set must have, or how large it must be, to qualify as big data. Although the "5V" description widely used in textbooks has been applied to these questions in much of the big data literature, it still has significant shortcomings and limitations and is not suitable for fully describing big data problems in practical fields such as industrial production. Therefore, this paper puts forward the new concept of the data cloud and a data cloud-based "3M" descriptive definition of big data, which refers to a wide range of data sources (Multisource), ultra-high dimensionality (Multi-dimensional) and a sufficiently long time span (Multi-spatiotemporal). Based on the 3M description, the paper sets up four typical application paradigms for production big data, analyses their typical applications, and lays a foundation for big data applications in the petrochemical industry.
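To make the 3M description concrete, here is a minimal Python sketch of a data-cloud descriptor that checks the three 3M criteria; the class name, fields, and thresholds are illustrative assumptions and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class DataCloud:
    """Illustrative descriptor of a production data cloud (not from the paper)."""
    n_sources: int        # number of distinct data sources (sensors, logs, LIMS, ...)
    n_dimensions: int     # number of measured variables per record
    span_days: float      # time span covered by the records

    def satisfies_3m(self, min_sources=10, min_dims=100, min_span_days=365) -> bool:
        """Check the three 3M criteria; the thresholds are arbitrary placeholders."""
        multisource = self.n_sources >= min_sources            # Multisource
        multidimensional = self.n_dimensions >= min_dims        # Multi-dimensional
        multispatiotemporal = self.span_days >= min_span_days   # Multi-spatiotemporal
        return multisource and multidimensional and multispatiotemporal

# Example: a refinery unit streaming two years of data from 40 sources and 500 tags
print(DataCloud(n_sources=40, n_dimensions=500, span_days=730).satisfies_3m())
```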

2020 ◽  
Vol 1 (1) ◽  
pp. 35-42
Author(s):  
Péter Ekler ◽  
Dániel Pásztor

Abstract (translated from Hungarian). Artificial intelligence has undergone enormous development in recent years, as a result of which it can now be found in some form in many different fields and has become an integral part of a great deal of research. This is mainly due to ever-improving learning algorithms and to the Big Data environment, which can supply enormous amounts of training data. The aim of the article is to summarise the current state of the technology. It reviews the history of artificial intelligence and a large part of the application areas in which artificial intelligence is a central element. It also points out various security vulnerabilities of artificial intelligence, as well as its applicability in the field of cybersecurity. The article presents a slice of current artificial intelligence applications that illustrate well the breadth of its use. Summary. In the past years artificial intelligence has seen several improvements, which has driven its use in many different areas and made it the focus of much research. This can be attributed to improvements in learning algorithms and Big Data techniques, which can provide tremendous amounts of training data. The goal of this paper is to summarize the current state of artificial intelligence. We present its history, introduce the terminology used, and show technological areas using artificial intelligence as a core part of their applications. The paper also introduces the security concerns related to artificial intelligence solutions, but also highlights how the technology can be used to enhance security in different applications. Finally, we present future opportunities and possible improvements. The paper shows some general artificial intelligence applications that demonstrate the wide range of uses of the technology. Many applications are built around artificial intelligence technologies, and there are many services that a developer can use to achieve intelligent behavior. The foundation of the different approaches is a well-designed learning algorithm, while the key to every learning algorithm is the quality of the data set used during the learning phase. There are applications that focus on image processing, such as face detection or gesture detection to identify a person. Other solutions compare signatures, while others perform object or plate-number detection (for example, the automatic parking system of an office building). Artificial intelligence and accurate data handling can also be used for anomaly detection in a real-time system. For example, there is ongoing research into anomaly detection at the ZalaZone autonomous car test field based on the collected sensor data. There are also more general applications, such as user profiling and automatic content recommendation using behavior-analysis techniques. However, artificial intelligence technology also has security risks that need to be eliminated before an application is released publicly. One concern is the generation of fake content, which must be detected with other algorithms that focus on small but noticeable differences. It is also essential to protect the data used by the learning algorithm and to protect the logic flow of the solution. Network security can help to protect these applications. Artificial intelligence can also help strengthen the security of a solution, as it is able to detect network anomalies and signs of a security issue.
Therefore, the technology is widely used in IT security to prevent different types of attacks. As Big Data technologies, computational power, and storage capacity increase over time, there is room for improved artificial intelligence solutions that can learn from large, real-time data sets. Advances in sensors can also help provide more precise data for different solutions. Finally, advanced natural language processing can help with communication between humans and computer-based solutions.
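As an illustration of the kind of real-time sensor anomaly detection mentioned above (for example at a vehicle test field), here is a minimal Python sketch using a rolling z-score; the window size, threshold, and signal are hypothetical and this is a simple baseline, not the method used in the cited research.

```python
import numpy as np

def rolling_zscore_anomalies(signal, window=50, threshold=4.0):
    """Flag samples whose deviation from the trailing-window mean exceeds
    `threshold` standard deviations. A deliberately simple baseline."""
    signal = np.asarray(signal, dtype=float)
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        past = signal[i - window:i]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags

# Example: a noisy sensor trace with one injected spike at index 700
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 1000)
trace[700] += 12.0
print(np.flatnonzero(rolling_zscore_anomalies(trace)))  # the spike at 700 should be flagged
```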


Author(s):  
Benedikt Gräler ◽  
Andrea Petroselli ◽  
Salvatore Grimaldi ◽  
Bernard De Baets ◽  
Niko Verhoest

Abstract. Many hydrological studies are devoted to the identification of events that are expected to occur on average within a certain time span. While this topic is well established in the univariate case, recent advances focus on a multivariate characterization of events based on copulas. Following a previous study, we show how the definition of the survival Kendall return period fits into the set of multivariate return periods. Moreover, we present a preliminary investigation of the ability of the multivariate return period definitions to select maximal events from a time series. Starting from a rich simulated data set, we show how similar the selections of events from a data set are. It can be deduced from the study, and theoretically underpinned, that the strength of correlation in the sample influences the differences between the selections of maximal events.
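For orientation, the standard copula-based return-period definitions that this family of studies builds on can be written as below; this is a sketch of the usual textbook forms (with mean inter-arrival time \(\mu\) and marginals \(u = F_X(x)\), \(v = F_Y(y)\)), not a verbatim reproduction of the paper's notation. The survival Kendall variant is obtained analogously by working with the survival copula \(\hat{C}\) instead of \(C\).

```latex
% Univariate return period
T(x) = \frac{\mu}{1 - F_X(x)}

% "OR" return period: at least one variable exceeds its threshold
T_{\lor}(x,y) = \frac{\mu}{1 - C(u, v)}

% "AND" return period: both variables exceed their thresholds
T_{\land}(x,y) = \frac{\mu}{1 - u - v + C(u, v)}

% Kendall return period, with Kendall function K_C(t) = P[C(U,V) \le t]
T_{\mathrm{KEN}}(t) = \frac{\mu}{1 - K_C(t)}
```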


2020 ◽  
Vol 27 (3) ◽  
Author(s):  
Fernando Celso de Campos ◽  
Alceu Gomes Alves Filho

Abstract: The production strategy is an overall pattern of decisions and actions that defines the role, objectives and production activities supporting a business strategy. It can be expanded into a general pattern of decisions that determines long-term capabilities and their contribution to the global strategy by combining market requirements and resources. Currently, following the evolution of the internet and its related technologies, a wide range of data is available in various formats and storage places. This voluminous available data set, called Big Data, can support managerial and strategic understanding and vision in an interesting, effective and relevant way, provided that certain factors and tools are combined. Therefore, the objective of this article is to present a proposal for a framework that supports the production strategy through the use of aspects of Big Data. The research method was conducted in three stages: i) a literature search in 5 steps; ii) elaboration of a theoretical-conceptual framework (BD-ProdStrateg) in 5 steps; and iii) an illustrative application of the proposed framework (BD-ProdStrateg). The main contribution was the systematization of a proposal that draws, on one hand, on consolidated authors of production strategy theory and, on the other hand, on the latest references about Big Data and the possibilities it offers. The illustrative application generated an expectation of continuity and a search for technological means to determine how soon the main problems would be resolved and what economic gains would result from this deployment.


Author(s):  
Denis Tikhomirov

The purpose of the article is to typologize terminological definitions of security, to identify what they have in common, and to reveal the distinctiveness of their interpretations depending on the subject of legal regulation. The methodological basis of the study comprises the methods that made it possible to obtain valid conclusions: in particular, the method of comparison, through which different interpretations of the term "security" could be correlated; the method of hermeneutics, which allowed the texts of normative legal acts of Ukraine to be elaborated; and the method of typologization, which made it possible to create typological groups of variants of understanding of the term "security". Scientific novelty. The article analyzes the understanding of the term "security" in various regulatory acts in force in Ukraine and forms typological groups of the ways the term "security" is understood. Conclusions. The analysis of the legal material confirms that security issues fall within the scope of both legislative regulation and various specialized by-laws. However, there is currently no single conception of how to interpret security terminology. This is due both to the wide range of social relations that are the subject of legal regulation and to the relativity of the notion of security itself, as well as to the lack of coherence of views on its definition in legal acts and in the scientific literature. The multiplicity of definitions is explained by combinations of material and procedural understandings, static and dynamic ones, and is conditioned by the peculiarities of a particular branch of legal regulation, the limited ability to use the methods of one or another branch, the inter-branch nature of some variations of security, and so on. Identifying what is common and what is different in the definitions of "security" can be used to further standardize the regulatory legal understanding of security and thus implement legal regulation in the security field more effectively.


Author(s):  
Tim Rutherford-Johnson

By the start of the 21st century many of the foundations of postwar culture had disappeared: Europe had been rebuilt and, as the EU, had become one of the world’s largest economies; the United States’ claim to global dominance was threatened; and the postwar social democratic consensus was being replaced by market-led neoliberalism. Most importantly of all, the Cold War was over, and the World Wide Web had been born. Music After The Fall considers contemporary musical composition against this changed backdrop, placing it in the context of globalization, digitization, and new media. Drawing on theories from the other arts, in particular art and architecture, it expands the definition of Western art music to include forms of composition, experimental music, sound art, and crossover work from across the spectrum, inside and beyond the concert hall. Each chapter critically considers a wide range of composers, performers, works, and institutions to build up a broad and rich picture of the new music ecosystem, from North American string quartets to Lebanese improvisers, from South American electroacoustic studios to pianos in the Australian outback. A new approach to the study of contemporary music is developed that relies less on taxonomies of style and technique, and more on the comparison of different responses to common themes, among them permission, fluidity, excess, and loss.


2019 ◽  
Vol 16 (7) ◽  
pp. 808-817 ◽  
Author(s):  
Laxmi Banjare ◽  
Sant Kumar Verma ◽  
Akhlesh Kumar Jain ◽  
Suresh Thareja

Background: In spite of the availability of various treatment approaches, including surgery, radiotherapy, and hormonal therapy, the steroidal aromatase inhibitors (SAIs) play a significant role as chemotherapeutic agents for the treatment of estrogen-dependent breast cancer, with the benefit of a reduced risk of recurrence. However, due to the greater toxicity and side effects associated with currently available anti-breast cancer agents, there is an urgent need to develop target-specific AIs with a safer anti-breast cancer profile. Methods: Designing target-specific and less toxic SAIs is a challenging task, even though molecular modeling tools such as molecular docking simulations and QSAR have been used for more than two decades for the fast and efficient design of novel, selective, potent and safe molecules against various biological targets. In order to design novel and selective SAIs, structure-guided, molecular docking-assisted, alignment-dependent 3D-QSAR studies were performed on a data set comprising 22 molecules bearing a steroidal scaffold with a wide range of aromatase inhibitory activity. Results: The 3D-QSAR model developed using the molecular weighted (MW) extent alignment approach showed better statistical quality and predictive ability than the model developed using the moments of inertia (MI) alignment approach. Conclusion: The explored binding interactions and the generated pharmacophoric features (steric and electrostatic) of the steroidal molecules could be exploited for the further design, direct synthesis and development of new, potentially safer SAIs that can be effective in reducing the mortality and morbidity associated with breast cancer.
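To show the general shape of the statistics behind such a 3D-QSAR workflow (field descriptors regressed against activity with partial least squares and leave-one-out validation), here is a generic Python sketch; the random descriptor matrix, component count, and variable names are placeholders and do not reproduce the authors' model or data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

# Placeholder data: 22 steroidal molecules, each described by a vector of
# steric/electrostatic field values sampled on a grid (here random numbers).
rng = np.random.default_rng(42)
X = rng.normal(size=(22, 300))               # field descriptors per molecule
y = rng.normal(loc=6.5, scale=1.0, size=22)  # e.g. pIC50 aromatase inhibition

# Partial least squares regression, as typically used in CoMFA-style 3D-QSAR.
pls = PLSRegression(n_components=3)
pls.fit(X, y)
r2 = r2_score(y, pls.predict(X))                       # conventional r^2
y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut())
q2 = r2_score(y, y_loo)                                # cross-validated q^2

print(f"r2 = {r2:.2f}, q2 = {q2:.2f}")  # with random data, q2 will be poor by design
```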


Author(s):  
Eun-Young Mun ◽  
Anne E. Ray

Integrative data analysis (IDA) is a promising new approach in psychological research and has been well received in the field of alcohol research. This chapter provides a larger unifying research synthesis framework for IDA. Major advantages of IDA of individual participant-level data include better and more flexible ways to examine subgroups, model complex relationships, deal with methodological and clinical heterogeneity, and examine infrequently occurring behaviors. However, between-study heterogeneity in measures, designs, and samples and systematic study-level missing data are significant barriers to IDA and, more broadly, to large-scale research synthesis. Based on the authors’ experience working on the Project INTEGRATE data set, which combined individual participant-level data from 24 independent college brief alcohol intervention studies, it is also recognized that IDA investigations require a wide range of expertise and considerable resources and that some minimum standards for reporting IDA studies may be needed to improve transparency and quality of evidence.
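As a minimal sketch of the kind of pooled, participant-level analysis that IDA involves, the following Python snippet fits a mixed-effects model with a random intercept for study, one common way to absorb between-study heterogeneity; the data frame, variable names, and effect sizes are hypothetical and are not drawn from Project INTEGRATE.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled data set: participant-level outcomes from several studies.
rng = np.random.default_rng(1)
n_studies, n_per_study = 24, 200
rows = []
for s in range(n_studies):
    study_effect = rng.normal(0, 0.5)              # between-study heterogeneity
    for _ in range(n_per_study):
        treated = rng.integers(0, 2)
        drinks = 10 + study_effect - 1.5 * treated + rng.normal(0, 3)
        rows.append({"study": s, "treated": treated, "drinks": drinks})
df = pd.DataFrame(rows)

# The random intercept for study absorbs study-level differences while the
# treatment effect is estimated from all participants jointly.
model = smf.mixedlm("drinks ~ treated", df, groups=df["study"])
result = model.fit()
print(result.summary())
```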


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 348
Author(s):  
Choongsang Cho ◽  
Young Han Lee ◽  
Jongyoul Park ◽  
Sangkeun Lee

Semantic image segmentation has a wide range of applications. In medical image segmentation, accuracy is even more important than in other areas, because the results feed directly into disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture known as U-Net. To effectively reflect spatial information in the feature maps of an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, a spatial feature map is estimated from the feature maps, and the learned weighting parameters are obtained from the computed map, since the segmentation result is predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is replaced with widely used convolutional frameworks: VGG, ResNet, and Bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For the performance evaluation of the proposed architecture, we used three data sets covering different medical imaging modalities. Experimental results show that the network with the proposed self-spatially adaptive weighting block based on the ResNet framework gave the highest IoU and DICE scores on the three tasks compared to other methods. In particular, the segmentation network combining the proposed self-spatially adaptive block and the ResNet framework recorded improvements of 3.01% and 2.89% in IoU and DICE scores, respectively, on the Nerve data set. Therefore, we believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
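One plausible way to realize such spatially adaptive weighting is to predict a per-pixel weight map from the features with a 1x1 convolution and rescale the features with it. The PyTorch sketch below is an interpretation under that assumption, not the authors' exact block.

```python
import torch
import torch.nn as nn

class SpatialWeightingBlock(nn.Module):
    """Illustrative spatially adaptive weighting: a 1x1 conv predicts a
    per-pixel weight map in (0, 1), which rescales the input feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.weight_map = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),  # collapse channels to one spatial map
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight_map(x)   # shape (N, 1, H, W)
        return x * w             # broadcast the weights over all channels

# Example: weight a ResNet-style feature map of size 64x56x56
feats = torch.randn(2, 64, 56, 56)
block = SpatialWeightingBlock(64)
print(block(feats).shape)  # torch.Size([2, 64, 56, 56])
```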


Author(s):  
Branka Vulesevic ◽  
Naozumi Kubota ◽  
Ian G Burwash ◽  
Claire Cimadevilla ◽  
Sarah Tubiana ◽  
...  

Abstract Aims Severe aortic valve stenosis (AS) is defined by an aortic valve area (AVA) <1 cm2 or an AVA indexed to body surface area (BSA) <0.6 cm2/m2, despite little evidence supporting the latter approach and important intrinsic limitations of BSA indexation. We hypothesized that AVA indexed to height (H) might be more applicable to a wide range of populations and body morphologies and might provide better predictive accuracy. Methods and results In 1298 patients with degenerative AS and preserved ejection fraction from three different countries and continents (derivation cohort), we aimed to establish an AVA/H threshold that would be equivalent to 1.0 cm2 for defining severe AS. In a distinct prospective validation cohort of 395 patients, we compared the predictive accuracy of AVA/BSA and AVA/H. Correlations between AVA and AVA/BSA or AVA/H were excellent (all R2 > 0.79) but greater with AVA/H. Regression lines were markedly different in obese and non-obese patients with AVA/BSA (P < 0.0001) but almost identical with AVA/H (P = 0.16). AVA/BSA values that corresponded to an AVA of 1.0 cm2 were markedly different in obese and non-obese patients (0.48 and 0.59 cm2/m2) but not with AVA/H (0.61 cm2/m for both). Agreement for the diagnosis of severe AS (AVA < 1 cm2) was significantly higher with AVA/H than with AVA/BSA (P < 0.05). Similar results were observed across the three countries. An AVA/H cut-off value of 0.6 cm2/m [HR = 8.2 (5.6–12.1)] provided the best predictive value for the occurrence of AS-related events [absolute AVA of 1 cm2: HR = 7.3 (5.0–10.7); AVA/BSA of 0.6 cm2/m2: HR = 6.7 (4.4–10.0)]. Conclusion In a large multinational/multiracial cohort, AVA/H was better correlated with AVA than AVA/BSA, and a cut-off value of 0.6 cm2/m provided better diagnostic and prognostic value than 0.6 cm2/m2. Our results suggest that severe AS should be defined as an AVA < 1 cm2 or an AVA/H < 0.6 cm2/m rather than a BSA-indexed value of 0.6 cm2/m2.
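To make the indexing arithmetic explicit, here is a small Python sketch that classifies severity under the absolute, BSA-indexed, and height-indexed thresholds quoted in the abstract; the choice of the Du Bois BSA formula and the example patient values are assumptions for illustration only.

```python
def bsa_du_bois(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 (Du Bois formula; other formulas exist)."""
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

def classify_severe_as(ava_cm2: float, height_cm: float, weight_kg: float) -> dict:
    """Apply the three severity thresholds discussed in the abstract."""
    height_m = height_cm / 100.0
    bsa = bsa_du_bois(height_cm, weight_kg)
    return {
        "AVA < 1 cm2":          ava_cm2 < 1.0,
        "AVA/BSA < 0.6 cm2/m2": ava_cm2 / bsa < 0.6,
        "AVA/H < 0.6 cm2/m":    ava_cm2 / height_m < 0.6,
    }

# Hypothetical obese patient: the BSA-indexed criterion flags "severe"
# while the absolute and height-indexed criteria do not.
print(classify_severe_as(ava_cm2=1.05, height_cm=170, weight_kg=115))
```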

