The Real Blockchain Game Changer

Author(s):  
Mitchell Loureiro ◽  
Ana Pêgo ◽  
Inês Graça Raposo

The single critical output of a blockchain is the creation of trust where it was previously impossible. While this feature delivers compelling value for many use cases (bitcoin for money; standards setting and data sharing for permissioned blockchains; audit trails and protection against liability in supply chains), the most novel use case has been something unexpected: the emergence of a new type of social structure for providing goods and services. The early examples of this new type of social structure have proven remarkably effective at delivering those services to their users.

2018 ◽  
Vol 57 (S 01) ◽  
pp. e57-e65 ◽  
Author(s):  
Fabian Prasser ◽  
Oliver Kohlbacher ◽  
Ulrich Mansmann ◽  
Bernhard Bauer ◽  
Klaus Kuhn

Summary Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on the German Medical Informatics Initiative. Future medicine will be predictive, preventive, personalized, participatory and digital. Data and knowledge at comprehensive depth and breadth need to be available for research and at the point of care as a basis for targeted diagnosis and therapy. Data integration and data sharing will be essential to achieve these goals. For this purpose, the consortium Data Integration for Future Medicine (DIFUTURE) will establish Data Integration Centers (DICs) at university medical centers. Objectives: The infrastructure envisioned by DIFUTURE will provide researchers with cross-site access to data and support physicians with innovative views of integrated data as well as decision support components for personalized treatments. The aim of our use cases is to show that this accelerates innovation, improves health care processes and results in tangible benefits for our patients. To realize our vision, numerous challenges have to be addressed. The objective of this article is to describe our concepts and solutions on the technical and the organizational level, with a specific focus on data integration and sharing. Governance and Policies: Data sharing implies significant security and privacy challenges. Therefore, state-of-the-art data protection, modern IT security concepts and patient trust play a central role in our approach. We have established governance structures and policies that safeguard data use and sharing through technical and organizational measures providing the highest levels of data protection. One of our central policies is that adequate methods of data sharing for each use case and project will be selected based on rigorous risk and threat analyses. Interdisciplinary groups have been established in order to manage change. Architectural Framework and Methodology: The DIFUTURE Data Integration Centers will implement a three-step approach to integrating, harmonizing and sharing structured, unstructured and omics data as well as images from clinical and research environments. First, data is imported and technically harmonized using common data and interface standards (including various IHE profiles, DICOM and HL7 FHIR). Second, data is preprocessed, transformed, harmonized and enriched within a staging and working environment. Third, data is imported into common analytics platforms and data models (including i2b2 and tranSMART) and made accessible in a form compliant with the interoperability requirements defined on the national level. Secure data access and sharing will be implemented with innovative combinations of privacy-enhancing technologies (safe data, safe settings, safe outputs) and methods of distributed computing. Use Cases: From the perspective of health care and medical research, our approach is disease-oriented and use-case driven, i.e., it follows the needs of physicians and researchers and aims at measurable benefits for our patients. We will work on early diagnosis, tailored therapies and therapy decision tools, with a focus on neurology, oncology and further disease entities. Our early use cases will serve as blueprints for the following ones, verifying that the infrastructure developed by DIFUTURE is able to support a variety of application scenarios. Discussion: Our own previous work, the use of internationally successful open source systems and a state-of-the-art software architecture are cornerstones of our approach. In the conceptual phase of the initiative, we have already prototypically implemented and tested the most important components of our architecture.
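As an illustration of the first integration step only, the following is a minimal sketch of pulling clinical data over HL7 FHIR and flattening it for a staging area; the endpoint URL, resource selection and flattening are assumptions and do not reflect DIFUTURE's actual implementation.

```python
# Minimal sketch of the first integration step: pulling clinical data via HL7 FHIR
# and flattening it for a staging environment. The server URL is hypothetical;
# DIFUTURE's actual endpoints, profiles and transformations are not reproduced here.
import requests

FHIR_BASE = "https://fhir.example-dic.org/fhir"  # hypothetical Data Integration Center endpoint

def fetch_observations(patient_id: str) -> list[dict]:
    """Fetch Observation resources for one patient (standard FHIR search interaction)."""
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"subject": f"Patient/{patient_id}", "_count": 100},
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

def to_staging_row(observation: dict) -> dict:
    """Technically harmonize one Observation into a flat record for the staging area."""
    coding = observation.get("code", {}).get("coding", [{}])[0]
    return {
        "patient": observation.get("subject", {}).get("reference"),
        "code": coding.get("code"),          # e.g. a LOINC code
        "system": coding.get("system"),
        "value": observation.get("valueQuantity", {}).get("value"),
        "unit": observation.get("valueQuantity", {}).get("unit"),
        "effective": observation.get("effectiveDateTime"),
    }

if __name__ == "__main__":
    rows = [to_staging_row(obs) for obs in fetch_observations("12345")]
    print(f"Staged {len(rows)} observations")
```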


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 592
Author(s):  
Radek Silhavy ◽  
Petr Silhavy ◽  
Zdenka Prokopova

Software size estimation is a nontrivial but important task for software project planning and management, typically based either on data analysis or on an algorithmic estimation approach. In this paper, a new method called Actors and Use Cases Size Estimation is proposed. The method is based only on the number of actors and use cases. It uses stepwise regression and led to a very significant reduction in errors when estimating the size of software systems compared to Use Case Points-based methods. The proposed method is independent of Use Case Points, which eliminates the effect of inaccurately determined Use Case Points components, because such components are not used.
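A minimal sketch of the underlying idea, regressing delivered size on counts of actors and use cases; the sample projects and the library choice (ordinary least squares via scikit-learn rather than the paper's stepwise procedure) are illustrative assumptions, not the published model.

```python
# Illustrative sketch: regressing software size on the number of actors and use cases.
# The sample data are made up and ordinary least squares stands in for the paper's
# stepwise regression; the published model and dataset are not reproduced here.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical projects: (actors, use cases) -> delivered size (e.g. SLOC).
X = np.array([[3, 12], [5, 20], [2, 8], [7, 31], [4, 15], [6, 24]])
y = np.array([14_000, 23_500, 9_800, 36_000, 17_200, 28_400])

model = LinearRegression().fit(X, y)
print("intercept:", model.intercept_)
print("coefficients (actors, use cases):", model.coef_)

# Estimate the size of a new system with 5 actors and 18 use cases.
new_project = np.array([[5, 18]])
print("estimated size:", model.predict(new_project)[0])
```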


2017 ◽  
Vol 24 (3) ◽  
pp. 261-274 ◽  
Author(s):  
Manuel Rodríguez Díaz ◽  
Tomás F Espino Rodríguez

Online reputation is a strategic factor in determining the competitiveness and marketing capacity of lodging companies. The influence of online opinions on customers' decisions is increasing, and, consequently, online reputation has become a new marketing tool for capturing clients and reaching sales objectives in the lodging industry. In this context, the reliability and validity of the customer evaluations available on websites are essential for competing in a tourism market shaped by the development of the Internet. The objective of this study is to analyze three of the most important online reputation websites in tourism in order to establish the reliability and validity of the scales used in customer reviews. The results demonstrate that the three websites analyzed fulfill the conventional statistical criteria of reliability and validity. However, a new type of validity is formulated in this study in order to test the capacity of the scales to determine the similarities or differences between tourism goods and services. Nonparametric tests were carried out, demonstrating that although the three websites meet the conventional statistical criteria of reliability and validity, only Booking.com has the capacity to differentiate between destinations.
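As a rough illustration of the kinds of checks involved, the sketch below computes Cronbach's alpha for a synthetic review scale and runs a Kruskal-Wallis test across three destinations; the data, items and tests are invented for illustration and may differ from the study's actual procedures.

```python
# Sketch: checking internal consistency (Cronbach's alpha) of a review scale and running a
# Kruskal-Wallis test of whether overall scores differ between destinations.
# All ratings below are synthetic.
import numpy as np
from scipy.stats import kruskal

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = scale items (e.g. cleanliness, staff, value)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(8.0, 1.0, size=(200, 1))                          # underlying satisfaction
ratings = np.clip(latent + rng.normal(0, 0.5, size=(200, 4)), 1, 10)  # four correlated items
print("Cronbach's alpha:", round(cronbach_alpha(ratings), 3))

# Overall scores for hotels in three destinations; do their distributions differ?
dest_a = rng.normal(8.4, 0.6, 80)
dest_b = rng.normal(8.1, 0.6, 80)
dest_c = rng.normal(7.6, 0.6, 80)
stat, p_value = kruskal(dest_a, dest_b, dest_c)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.4f}")
```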


2014 ◽  
Vol 23 (01) ◽  
pp. 27-35 ◽  
Author(s):  
S. de Lusignan ◽  
S-T. Liaw ◽  
C. Kuziemsky ◽  
F. Mold ◽  
P. Krause ◽  
...  

Summary Background: Generally, the benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, some rarer and longer-term events require new methods. Big data generated by increasingly affordable personalised computing and by pervasive computing devices is growing rapidly, and low-cost, high-volume cloud computing makes processing these data inexpensive. Objective: To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. Method: We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate the benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. Results: We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowd-sourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the “internet of things”; and (iii) real-time monitoring of epidemics as well as vaccine effects via social media and other data sources. Conclusions: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance.
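As a toy illustration of the real-time monitoring use case, the sketch below counts vaccine-related posts per day and flags spikes above a rolling baseline; the data, source and threshold are assumptions rather than the review's methodology.

```python
# Toy sketch of real-time monitoring: count vaccine-related posts per day and flag days
# that spike above a rolling baseline. The counts and the 3-standard-deviation threshold
# are illustrative assumptions.
import pandas as pd

# Hypothetical daily counts of posts mentioning a vaccine-related adverse event.
counts = pd.Series(
    [12, 15, 11, 14, 13, 16, 12, 14, 15, 13, 40, 55, 14, 12],
    index=pd.date_range("2024-01-01", periods=14, freq="D"),
)

baseline = counts.rolling(window=7, min_periods=7).mean().shift(1)
spread = counts.rolling(window=7, min_periods=7).std().shift(1)
alerts = counts[counts > baseline + 3 * spread]

print(alerts)  # days whose mention count exceeds the prior week's mean by 3 SDs
```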


Author(s):  
Matt Woodburn ◽  
Gabriele Droege ◽  
Sharon Grant ◽  
Quentin Groom ◽  
Janeen Jones ◽  
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long term goal however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To sustainably achieve this, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns depending on the purpose of a particular implementation. The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centered around the ‘ObjectGroup’ (Fig. 1), a class that may represent any group (of any size) of physical collection objects, which have one or more common characteristics. This generic definition of the ‘collection’ in ‘collection descriptions’ is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard are likely to be relevant. In some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model as part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models. So, in summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that may not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and define any necessary constraints within the wider scope of the standard and model. This is the concept of the ‘collection description scheme,’ a profile that defines elements such as: which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked, and how the various classes should be related to each other. 
Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard and discuss the process of defining new collection description schemes using the standard and data model, and focus on DiSSCo requirements as examples of real-world collection description use cases.
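As a rough illustration of how such a record might look in practice, the sketch below models an ObjectGroup with attached MeasurementOrFact entries; the class names come from the abstract, while the specific property names and values are hypothetical rather than normative terms from the draft standard.

```python
# Sketch of a collection-description record centred on an ObjectGroup with attached
# MeasurementOrFact entries, as described in the draft standard. The class names come
# from the abstract; the property names and values below are illustrative guesses.
from dataclasses import dataclass, field

@dataclass
class MeasurementOrFact:
    measurement_type: str        # e.g. "objectCount", "percentDigitised" (hypothetical)
    measurement_value: str
    measurement_unit: str = ""

@dataclass
class ObjectGroup:
    title: str                   # any group of physical objects sharing characteristics
    discipline: str              # ideally a value from a controlled vocabulary, per the scheme
    preservation_method: str
    measurements: list[MeasurementOrFact] = field(default_factory=list)

entomology_pinned = ObjectGroup(
    title="Pinned Lepidoptera",
    discipline="Entomology",
    preservation_method="Pinned",
    measurements=[
        MeasurementOrFact("objectCount", "120000", "specimens"),
        MeasurementOrFact("percentDigitised", "35", "%"),
    ],
)
print(entomology_pinned.title, [m.measurement_type for m in entomology_pinned.measurements])
```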


2021 ◽  
Vol 8 (4) ◽  
pp. 19-34
Author(s):  
Samuel Nii Attoh Abbey

With the flagship success of M-Pesa, mobile devices have become an important tool for facilitating the financial inclusion of the previously unbanked population in developing countries. Following the success of M-Pesa in Kenya in 2007, mobile money technologies became widespread across Africa. Beginning in 2009, Ghana experienced exceptional adoption of mobile money technology. Many studies have examined the influence of mobile money on financial inclusion from a variety of perspectives, and many have concluded that mobile money is a game-changer in this regard. The mobile money concept has evolved with the introduction of other value-added services such as microloans, savings, and insurance portfolios. The researcher used a questionnaire and face-to-face interviews to obtain qualitative data for this study. Together with other research, the data revealed that mobile money transactions in Ghana have more than tripled since it became the most popular payment method. Over the last year, the platform as a service has created over 140,000 jobs and has proven to be the safest channel. It has several advantages, including lowering the cost of printing and keeping cash on hand, as well as reducing fraud, because the underlying technology provides appropriate audit trails to prevent fraud and boost economic growth.


Author(s):  
Chris Fill ◽  
Scot McKee

This chapter explores some of the principal characteristics used to define business markets and marketing. It establishes the key elements of business-to-business (B2B) marketing and makes comparisons with the better-known business-to-consumer (B2C) sector. This leads to a consideration of appropriate definitions, parameters and direction for the book. After setting out the main types of organisations that operate in the B2B sector and categorising the goods and services that they buy or sell, the chapter introduces ideas about the business marketing mix, perceived value, supply chains, interorganisational relationships and relationship marketing. This opening chapter lays down the vital foundations and key principles which are subsequently developed in the book.


2016 ◽  
Vol 13 (2) ◽  
pp. 71-88 ◽  
Author(s):  
Jun Dai ◽  
Qiao Li

ABSTRACT Each year, governments around the world spend billions of dollars purchasing a wide variety of goods and services. These governments must spend their money wisely in order to eliminate fraud, waste, and abuse of taxpayer dollars. Although government contracting systems are supposed to be transparent, people may still take advantage of these systems to gain benefits, which leads to high-risk contracts and, sometimes, costly government fraud. Recently, governments in some countries have started open data initiatives in order to make government operations more transparent to their citizens. With open data, a new type of auditor, called an armchair auditor, could play an important role in monitoring government spending. An armchair auditor could be anyone who has an interest in government expenditures and who usually uses technologies to perform analyses on open data. Few studies have discussed how armchair auditors can better use open data and what data analytics tools could be applied. To that end, this paper proposes a list of audit apps that could assist armchair auditors in analyzing open government procurement data. These apps could help investigate procurement data from different perspectives, such as validating contractor qualification, detecting defective pricing, etc. This paper uses Brazilian federal government procurement contract data to illustrate the functionality of these apps; however, the apps could be applied to open government data in a variety of other nations.
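As a rough illustration of what one such audit app might do, the sketch below flags contracts priced well above the median for their category, a simple defective-pricing screen; the column names, data and threshold are assumptions about how open procurement data could be structured, not the paper's actual apps.

```python
# Sketch of one possible "audit app": flag procurement contracts priced well above the
# median for their category, a simple defective-pricing screen. Column names, data and
# the 2x-median threshold are illustrative assumptions.
import pandas as pd

contracts = pd.DataFrame({
    "contract_id": ["C1", "C2", "C3", "C4", "C5", "C6"],
    "category":    ["IT services", "IT services", "IT services",
                    "Office supplies", "Office supplies", "Office supplies"],
    "supplier":    ["Alpha", "Beta", "Gamma", "Delta", "Epsilon", "Zeta"],
    "amount":      [120_000, 135_000, 410_000, 8_000, 9_500, 7_200],
})

category_median = contracts.groupby("category")["amount"].transform("median")
flagged = contracts[contracts["amount"] > 2 * category_median]

print(flagged[["contract_id", "supplier", "category", "amount"]])
```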


Author(s):  
Mathias Uslar ◽  
Fabian Grüning ◽  
Sebastian Rohjans

In this chapter, the authors provide two use cases on semantic interoperability in the electric utility industry based on the IEC TR 62357 seamless integration architecture. The first use case, on semantic integration based on ontologies, deals with the integration of two heterogeneous standards families, IEC 61970 and IEC 61850. Based on a quantitative analysis, we outline the need for integration and provide a solution based on our framework, COLIN. The second use case points out the need for better metadata semantics in the utility sector and is based solely on the IEC 61970 standard. The authors provide a solution that uses the CIM as a domain ontology and taxonomy for improving data quality. Finally, the chapter outlines open questions and argues that proper semantics and domain models based on international standards can improve the systems within a utility.
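As a rough illustration of treating the CIM as a queryable domain ontology, the sketch below loads a CIM RDF serialization with rdflib and lists its classes via SPARQL; the file name is hypothetical, and this does not reproduce the COLIN framework, only the general idea of querying the model.

```python
# Sketch of using the CIM as a domain ontology: load an RDF serialization of the
# IEC 61970 CIM and list its classes with SPARQL. The file name is hypothetical and
# this does not reproduce the COLIN framework described in the chapter.
from rdflib import Graph

g = Graph()
g.parse("iec61970_cim.rdf")   # hypothetical local copy of a CIM RDF Schema export

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cls ?label WHERE {
    ?cls a rdfs:Class .
    OPTIONAL { ?cls rdfs:label ?label }
}
ORDER BY ?cls
"""

for cls, label in g.query(query):
    print(cls, label)
```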

