Technological Characteristic of Futures Based on Virtual Assets

2021
Vol 4
pp. 113-116
Author(s):
Evhen Nevmerzhytsky
Mykola Yeshchenko

A virtual asset is a type of asset that has no material representation, although its value is reflected in a real currency. Due to their nature, the prices of digital assets are usually highly volatile, especially for futures, which are derivative financial contracts. This is the most important contributing factor to the low usability of digitally based contracts in enterprise operations.

Previously existing virtual assets included photography, logos, illustrations, animations, audiovisual media, etc. However, virtually all such assets required a third-party platform for exchange into currency. The necessity of a mediator trusted by both sides greatly limited ease of use and ultimately restricted the number of such transactions. Still, the popularity of digital assets only grew, as evidenced by the explosive growth of software applications in the 2000s and of the blockchain-based asset space in the 2010s.

The newest and most promising solution developed is based on cryptoassets. The underlying use of blockchain technology for checking and storing transactions ensures clarity in a virtual asset's value history. Smart contracts written for the Ethereum platform, for example, provide a highly trustworthy way of expressing the predefined conditions of a transaction. This allows safe and calculated enterprise usage and eliminates the need for a mutually trusted third party. Transactions are fully automated and execute as soon as the predefined external conditions are met.

Ethereum was chosen as an exemplary platform due to its high flexibility and the amount of existing development; even now, further advancements are being explored by its founder and community. Besides Ether, it is also used for non-fungible tokens, decentralized finance, and enterprise blockchain solutions. Another important point is how much more environmentally friendly it is compared to its main competitors, due to the energy efficiency of the mining process enforced by the platform itself. This makes it ideal for responsible usage as well as further research.

This article explores the usage of digital assets and explains the technological background of cryptoassets in order to highlight recent developments in the area of futures based on virtual assets, using a particular Ethereum implementation that offers perpetual futures as an example.
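The conditional, automated settlement described above can be pictured with a minimal off-chain model. The sketch below is a simplified Python illustration of the kind of predefined conditions a perpetual-futures contract encodes (periodic funding payments and automatic liquidation); the formulas, parameter values and names are assumptions for illustration only, not the implementation discussed in the article.

```python
# Illustrative Python model of the predefined conditions a perpetual-futures
# smart contract encodes on-chain; names and formulas are simplified assumptions,
# not the contract discussed in the article.
from dataclasses import dataclass

@dataclass
class PerpPosition:
    size: float          # contracts held (positive = long, negative = short)
    entry_price: float   # price at which the position was opened
    margin: float        # collateral posted, in the settlement currency

def funding_payment(position: PerpPosition, mark_price: float, index_price: float,
                    funding_rate_cap: float = 0.0075) -> float:
    """Periodic payment that keeps the perpetual future anchored to the spot index."""
    rate = (mark_price - index_price) / index_price
    rate = max(-funding_rate_cap, min(funding_rate_cap, rate))
    return position.size * mark_price * rate   # longs pay shorts when mark > index

def should_liquidate(position: PerpPosition, mark_price: float,
                     maintenance_margin_ratio: float = 0.05) -> bool:
    """Predefined condition under which the contract closes the position automatically."""
    unrealized_pnl = position.size * (mark_price - position.entry_price)
    equity = position.margin + unrealized_pnl
    return equity < maintenance_margin_ratio * abs(position.size) * mark_price

if __name__ == "__main__":
    pos = PerpPosition(size=2.0, entry_price=1800.0, margin=400.0)
    print(funding_payment(pos, mark_price=1825.0, index_price=1810.0))
    print(should_liquidate(pos, mark_price=1650.0))
```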

Author(s):  
Brian Stokes

Background with rationale: Business Intelligence (BI) software applications collect and process large amounts of data from one or more sources, and for a variety of purposes. These can include generating operational or sales reports, developing dashboards and data visualisations, and ad-hoc analysis and querying of enterprise databases. Methods/Approach: In deciding to develop a series of dashboards to visually represent data stored in its MLM, the TDLU identified routine requests for these data and critically examined existing techniques for extracting data from its MLM. Traditionally, Structured Query Language (SQL) queries were developed and used for a single purpose. By critically analysing the limitations of this approach, the TDLU identified the power of BI tools and their ease of use for both technical and non-technical staff. Results: Implementing a BI tool is enabling quick and accurate production of a comprehensive array of information. Such information assists with cohort size estimation, producing data for routine and ad-hoc reporting, identifying data quality issues, and answering questions from prospective users of linked data services, including instantly producing estimates of links stored across disparate datasets. Conclusion: BI tools are not traditionally considered integral to the operations of data linkage units. However, the TDLU has successfully applied a BI tool to make a rich set of data locked in its MLM quickly available in multiple, easy-to-use formats and to technical and non-technical staff.
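The kind of single-purpose query that a BI tool replaces with ad-hoc, reusable analysis can be sketched briefly. The example below estimates a cohort size (people with records in two datasets of a linkage map) from toy data; the table, column names and datasets are hypothetical assumptions, not the TDLU's actual schema.

```python
# Minimal, hypothetical sketch of a cohort-size estimate over a toy linkage map.
# Table and column names are illustrative assumptions, not the TDLU's schema.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
# Toy "master linkage map": one row per (person_key, source dataset) pair.
links = pd.DataFrame(
    {
        "person_key": [1, 1, 2, 3, 3, 3, 4],
        "dataset": ["hospital", "deaths", "hospital", "hospital", "deaths", "cancer", "cancer"],
    }
)
links.to_sql("linkage_map", conn, index=False)

# Cohort-size estimate: people present in BOTH the hospital and deaths datasets.
query = """
SELECT COUNT(DISTINCT h.person_key) AS cohort_size
FROM linkage_map h
JOIN linkage_map d ON d.person_key = h.person_key
WHERE h.dataset = 'hospital' AND d.dataset = 'deaths'
"""
print(pd.read_sql_query(query, conn))   # -> cohort_size = 2
```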


Author(s):  
Carlotta Domeniconi
Dimitrios Gunopulos

Pattern classification is a very general concept with numerous applications, ranging from science, engineering, target marketing, medical diagnosis and electronic commerce to weather forecasting based on satellite imagery. A typical application of pattern classification is mass mailing for marketing. For example, credit card companies often mail solicitations to consumers; naturally, they would like to target those consumers who are most likely to respond. Often, demographic information is available for those who have responded previously to such solicitations, and this information may be used to target the most likely respondents. Another application is electronic commerce of the new economy. E-commerce provides a rich environment to advance the state of the art in classification because it demands effective means for text classification in order to make rapid product and market recommendations.

Recent developments in data mining have posed new challenges to pattern classification. Data mining is a knowledge discovery process whose aim is to discover unknown relationships and/or patterns from a large set of data, from which it is possible to predict future outcomes. As such, pattern classification becomes one of the key steps in an attempt to uncover the hidden knowledge within the data. The primary goal is usually predictive accuracy, with secondary goals being speed, ease of use, and interpretability of the resulting predictive model.

While pattern classification has shown promise in many areas of practical significance, it faces difficult challenges posed by real-world problems, of which the most pronounced is Bellman's curse of dimensionality: the sample size required to perform accurate prediction on problems with high dimensionality is beyond feasibility. This is because in high-dimensional spaces data become extremely sparse and far apart from each other. As a result, severe bias that affects any estimation process can be introduced in a high-dimensional feature space with finite samples.

Learning tasks with data represented as a collection of a very large number of features abound. For example, microarrays contain an overwhelming number of genes relative to the number of samples. The Internet is a vast repository of disparate information growing at an exponential rate. Efficient and effective document retrieval and classification systems are required to turn the ocean of bits around us into useful information, and eventually into knowledge. This is a challenging task, since a word-level representation of documents easily leads to 30,000 or more dimensions.

This chapter discusses classification techniques that mitigate the curse of dimensionality and reduce bias by estimating feature relevance and selecting features accordingly. This issue has both theoretical and practical relevance, since many applications can benefit from improvements in prediction performance.
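As a generic illustration of relevance-based feature selection (not the specific techniques developed in the chapter), the sketch below scores the word features of a text corpus and keeps only the most class-relevant ones before fitting a classifier; the dataset, scoring function and parameter choices are assumptions made for the example.

```python
# Generic illustration of scoring feature relevance and keeping only the most
# informative features for a high-dimensional text classification problem.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

# Tens of thousands of word features are reduced to the 1,000 most class-relevant ones.
model = make_pipeline(
    TfidfVectorizer(),              # word-level representation: very high dimensional
    SelectKBest(chi2, k=1000),      # estimate feature relevance, keep the top k
    LogisticRegression(max_iter=1000),
)
model.fit(train.data, train.target)
print(model.score(train.data, train.target))
```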


2009
Vol 1 (4)
pp. 51-71
Author(s):
Suleiman Almasri
Muhammad Alnabhan
Ziad Hunaiti
Eliamani Sedoyeka

Pedestrian location-based services (LBS) are accessible from hand-held devices and have become a large field of active research since recent developments in wireless communication, mobile technologies and positioning techniques. LBS applications provide services such as finding a nearby facility within a certain area, for example the closest restaurant, hospital, or public telephone. With the increased demand for richer mobile services, LBS offer a promising add-on to the current services offered by network operators and third-party service providers, such as multimedia content. The performance of an LBS system is directly affected by each component of its architecture. Firstly, the end-user mobile device still suffers from insufficient storage, limited CPU capability and short battery life. Secondly, the mobile wireless network still has problems with limited bandwidth, packet loss, congestion and delay. Additionally, although GPS is the most accurate navigation system, there are still issues in micro-scale navigation, mainly availability and accuracy. Finally, the LBS server, which hosts geographical and user information, has difficulty managing the huge volume of data, which causes long query processing times. This paper presents a technical investigation and analysis of the performance of each component of an LBS system for pedestrian navigation, through several experimental tests conducted in different locations. The results of this investigation pinpoint the weaknesses of the system in micro-scale environments. In addition, this paper proposes a group of solutions and recommendations for most of these shortcomings.
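The basic proximity query behind such services can be sketched in a few lines. The example below finds the closest facility to a pedestrian's GPS fix using the haversine distance; the coordinates and points of interest are made-up illustrative data, not the paper's test locations.

```python
# Minimal sketch of the core LBS query: find the closest facility to the user.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

# Made-up points of interest near a made-up pedestrian position.
pois = [
    ("restaurant", 51.5079, -0.0877),
    ("hospital", 51.5136, -0.0984),
    ("public telephone", 51.5099, -0.0859),
]
user = (51.5074, -0.0900)   # pedestrian's GPS fix

name, dist = min(
    ((n, haversine_m(user[0], user[1], lat, lon)) for n, lat, lon in pois),
    key=lambda x: x[1],
)
print(f"Nearest facility: {name} ({dist:.0f} m away)")
```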


2019
Vol 62 (12)
pp. 1849-1862
Author(s):
San Ling
Khoa Nguyen
Huaxiong Wang
Juanyang Zhang

Abstract Efficient user revocation is a necessary but challenging problem in many multi-user cryptosystems. Among known approaches, server-aided revocation yields a promising solution, because it allows the major workloads of system users to be outsourced to a computationally powerful third party, called the server, whose only requirement is to carry out the computations correctly. Such a revocation mechanism was considered in the settings of identity-based encryption and attribute-based encryption by Qin et al. (2015, ESORICS) and Cui et al. (2016, ESORICS), respectively. In this work, we consider the server-aided revocation mechanism in the more elaborate setting of predicate encryption (PE). The latter, introduced by Katz et al. (2008, EUROCRYPT), provides fine-grained and role-based access to encrypted data and can be viewed as a generalization of identity-based and attribute-based encryption. Our contribution is two-fold. First, we formalize the model of server-aided revocable PE (SR-PE), with rigorous definitions and security notions. Our model can be seen as a non-trivial adaptation of Cui et al.'s work to the PE context. Second, we put forward a lattice-based instantiation of SR-PE. The scheme employs the PE scheme of Agrawal et al. (2011, ASIACRYPT) and the complete subtree method of Naor et al. (2001, CRYPTO) as its two main ingredients, which work smoothly together thanks to a few additional techniques. Our scheme is proven secure in the standard model (in a selective manner), based on the hardness of the learning with errors problem.
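The complete subtree method referenced above places users at the leaves of a binary tree and covers the non-revoked users with the subtrees hanging off the Steiner tree of the revoked leaves. The sketch below shows only this combinatorial revocation ingredient (not the lattice-based scheme itself), assuming a full binary tree stored heap-style.

```python
# Sketch of the complete subtree (CS) cover computation of Naor-Naor-Lotspiech.
# Users sit at the leaves of a full binary tree stored heap-style
# (node 1 is the root, nodes 2i and 2i+1 are the children of node i).
def complete_subtree_cover(num_leaves: int, revoked_leaves: set[int]) -> set[int]:
    """Return node indices whose subtrees cover exactly the non-revoked leaves."""
    first_leaf = num_leaves  # leaves occupy indices num_leaves .. 2*num_leaves - 1
    if not revoked_leaves:
        return {1}  # nobody revoked: the root's subtree covers everyone

    # Steiner tree: all ancestors of revoked leaves (including the leaves and root).
    steiner = set()
    for leaf in revoked_leaves:
        node = first_leaf + leaf
        while node >= 1:
            steiner.add(node)
            node //= 2

    # Cover = children of Steiner-tree nodes that fall outside the Steiner tree.
    cover = set()
    for node in steiner:
        for child in (2 * node, 2 * node + 1):
            if child < 2 * num_leaves and child not in steiner:
                cover.add(child)
    return cover

# 8 users (leaves 0..7), users 2 and 5 revoked -> cover {4, 7, 11, 12}.
print(sorted(complete_subtree_cover(8, {2, 5})))
```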


2019
Vol 68 (3)
pp. 641-658
Author(s):  
Harrison Smith

The location analytics industry has the potential to stimulate critical sociological discussions concerning the credibility of data analytics to enact new spatial classifications and metrics of socio-economic phenomena. Key debates in the sociology of geodemographics are revisited in this article in light of recent developments in algorithmic culture to understand how location analytics impacts the structural contexts of classification and relevance in digital marketing. It situates this within a locative imaginary, where marketers are experimenting with consolidating the epistemes of behavioural targeting, classification and performance evaluation in urban environments through spatial analytics of movement. This opens up future research into the political and cultural economies of relevance in media landscapes and the social shaping of valuable subjects by third-party data brokers and analytics platforms that have become matters of public and regulatory concern.


2014
Vol 891-892
pp. 702-707
Author(s):
Chris Wallbrink
Wei Ping Hu

A computer program for fatigue life and crack growth analysis, entitled CGAP, has been developed at the Defence Science and Technology Organisation in support of the aircraft structural life assessment programs of the Australian Defence Force. The key objectives in developing this software platform were to provide a flexible, robust, economical, adaptable, and well verified and validated fatigue analysis tool. CGAP provides advanced capabilities for crack growth analysis, including crack growth in notch-affected plastic zones, and for probabilistic crack growth analysis. It also provides a seamless interface to third-party models, such as FASTRAN and FAMS, enabling easy benchmarking against, and collaboration with, international partners. This paper summarises some of the recent developments in analytical and numerical fatigue damage and crack growth modelling, with emphasis on software verification and validation. Examples are presented to illustrate its application.
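For readers unfamiliar with crack growth analysis, the sketch below integrates a simple Paris-law crack-growth relation cycle by cycle. The constants, geometry factor and stress-intensity model are generic textbook-style assumptions, not CGAP's models; the example only shows the kind of calculation such tools automate and verify.

```python
# Generic, simplified Paris-law crack growth integration (illustrative only).
from math import pi, sqrt

def grow_crack(a0_m: float, stress_range_mpa: float, cycles: int,
               C: float = 1.0e-11, m: float = 3.0, geometry: float = 1.12) -> float:
    """Integrate da/dN = C * (dK)^m cycle by cycle; dK in MPa*sqrt(m), a in metres."""
    a = a0_m
    for _ in range(cycles):
        delta_k = geometry * stress_range_mpa * sqrt(pi * a)  # assumed edge-crack geometry
        a += C * delta_k ** m
    return a

# 1 mm starting crack, 100 MPa stress range, 200,000 cycles (all made-up values).
print(f"final crack length: {grow_crack(1e-3, 100.0, 200_000) * 1e3:.2f} mm")
```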


1972
Vol 35 (5)
pp. 279-284
Author(s):  
G. H. Richardson

Recent developments in dairy laboratory instrumentation are reviewed. Microbiological, physical-chemical, and mastitis testing instrumentation, potentially valuable in providing milk payment and quality information, is receiving regulatory approval. The economics associated with some of these devices suggest the need for the development of third-party, central milk and food testing laboratories.


2015
Author(s):
Hyun Shik Yoon

In recent times, the transactional scale of consumer-to-consumer (C2C) e-commerce has grown rapidly and C2C e-commerce has become more popular. In C2C e-commerce, customers face a higher risk of buying fake and poor-quality products. However, the mainstream of research has focused on business-to-consumer (B2C) e-commerce. In this research, a quantitative model of C2C e-commerce usage was developed, which incorporates five dimensions, namely: (1) a personality dimension, including openness, extraversion, agreeableness, conscientiousness, and neuroticism; (2) a usability dimension, including perceived ease of use, perceived usefulness, and perceived website quality; (3) a risk dimension, including perceived security and perceived privacy; (4) green concern as a social influence dimension; and (5) an institutional feature dimension, including buyer protection policy and third-party recognition. This study provides a quantitative model describing C2C e-commerce usage as a distinct area of research from B2C e-commerce. In addition, the results show that customers' intention to use C2C e-commerce can be increased by redesigning C2C e-commerce websites.


This paper expounds the use of blockchain to record students' marks, since it is very tedious to record and verify candidates' credentials for academic and employment purposes. Maintaining student marks is difficult for any college or university: an educational institution is duty-bound to provide student results at any point in time, a result can be challenged at any time, and so an institute must store and maintain results over a long period. In this paper, an attempt has been made to solve some of the difficulties of a students' result management system. Education is now adopting Bitcoin-style blockchain technology to record credentials. We have used blockchain technology to record students' achievements in a cheap, secure and public way. It also saves employers from spending valuable time checking educational credentials by calling universities or paying a third party to do so. In this paper, we have used Ethereum as the underlying blockchain due to its scalability and ease of use. For development, we used Truffle to build the smart contracts and integrated them with the frontend using Web3JS. For deployment of our smart contracts to the blockchain network we used Infura, and the frontend is deployed on Heroku for user interaction. Our blockchain is currently on the Ropsten testnet.
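As an illustration of how such a deployment is queried, the sketch below reads a mark from a contract on Ropsten through Infura using Python's web3.py (the paper's own stack uses Web3JS instead); the contract address, ABI and getMarks() function are hypothetical placeholders, not the paper's actual contract.

```python
# Illustrative sketch (Python / web3.py, not the paper's Web3JS) of querying a
# deployed marks contract on Ropsten via Infura. Address, ABI and getMarks()
# are hypothetical placeholders.
from web3 import Web3

INFURA_URL = "https://ropsten.infura.io/v3/<YOUR_PROJECT_ID>"   # placeholder
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
ABI = [{
    "name": "getMarks",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "studentId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

w3 = Web3(Web3.HTTPProvider(INFURA_URL))
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)

# Read-only call: anyone can verify a student's recorded mark without a third party.
marks = contract.functions.getMarks(12345).call()
print(f"Marks recorded on-chain for student 12345: {marks}")
```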


2020
Vol 245
pp. 03032
Author(s):
Alexey Anisenkov
Julia Andreeva
Alessandro Di Girolamo
Panos Paparrigopoulos
Boris Vasilev

CRIC is a high-level information system which provides a flexible, reliable and complete topology and configuration description for a large-scale distributed heterogeneous computing infrastructure. CRIC aims to facilitate distributed computing operations for the LHC experiments and to consolidate WLCG topology information. It aggregates information coming from various low-level information sources and complements the topology description with experiment-specific data structures and settings required by the LHC VOs in order to exploit computing resources. Being an experiment-oriented but still experiment-independent information middleware, CRIC offers a generic solution, in the form of a suitable framework with appropriate interfaces implemented, which can be successfully applied at the global WLCG level or at the level of a particular LHC experiment; for example, there are CRIC instances for CMS [11] and ATLAS [10]. CRIC can even be used for special tasks: for example, a dedicated CRIC instance has been built to support transfer tests performed by the DOMA Third Party Copy working group. Moreover, the extensibility and flexibility of the system allow CRIC to follow technology evolution and easily implement the concepts required to describe new types of computing and storage resources. This contribution describes the overall CRIC architecture, the plug-in based implementation of the CRIC components, as well as recent developments and future plans.
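The plug-in based aggregation described here can be pictured with a small, purely hypothetical sketch: each low-level information source is wrapped in a collector plug-in, and a core merges their views into one topology description. The class names, sources and attributes below are illustrative assumptions, not CRIC's actual code.

```python
# Hypothetical sketch of a plug-in style information system in the spirit of the
# description above: collector plug-ins fetch topology data from low-level sources
# and a core aggregates them into one description.
from abc import ABC, abstractmethod


class TopologyCollector(ABC):
    """One plug-in per low-level information source."""

    @abstractmethod
    def collect(self) -> dict[str, dict]:
        """Return {site_name: attributes} as seen by this source."""


class GridInfoCollector(TopologyCollector):
    def collect(self) -> dict[str, dict]:
        return {"CERN-PROD": {"tier": 0, "country": "CH"}}


class ExperimentOverrideCollector(TopologyCollector):
    def collect(self) -> dict[str, dict]:
        return {"CERN-PROD": {"queues": ["analysis", "production"]}}


def aggregate(collectors: list[TopologyCollector]) -> dict[str, dict]:
    """Merge per-source views; later plug-ins complement or override earlier ones."""
    merged: dict[str, dict] = {}
    for collector in collectors:
        for site, attrs in collector.collect().items():
            merged.setdefault(site, {}).update(attrs)
    return merged


print(aggregate([GridInfoCollector(), ExperimentOverrideCollector()]))
```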

