Telecommunication Industry

2015 ◽  
pp. 1162-1174
Author(s):  
Fredrik Solsvik ◽  
Michel Dao

The operators Telefónica, Orange, and Telenor represent the telecommunication industry in the VISION Cloud project. Together, they provide a telco-oriented use case that supplies feedback and requirements to the work on the reference architecture being developed. The use cases are derived from identified challenges and opportunities relating to storage and mobility technologies, and they validate the VISION Cloud reference architecture through prototype tests and experimentation that allow the use case to be evaluated in concrete scenarios. Telecommunication industry challenges are addressed by the advancements made in successive VISION Cloud project iterations, which take the inputs from the telco use case and other use cases into consideration. This chapter studies telecommunication industry challenges and opportunities with respect to the cloud storage technology advancements made in VISION Cloud.


2018 ◽  
Vol 57 (S 01) ◽  
pp. e92-e105 ◽  
Author(s):  
Alfred Winter ◽  
Sebastian Stäubert ◽  
Danny Ammon ◽  
Stephan Aiche ◽  
Oya Beyan ◽  
...  

Summary Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on the German Medical Informatics Initiative. “Smart Medical Information Technology for Healthcare (SMITH)” is one of four consortia funded by the German Medical Informatics Initiative (MI-I) to create an alliance of universities, university hospitals, research institutions and IT companies. SMITH’s goals are to establish Data Integration Centers (DICs) at each SMITH partner hospital and to implement use cases which demonstrate the usefulness of the approach. Objectives: To give insight into architectural design issues underlying SMITH data integration and to introduce the use cases to be implemented. Governance and Policies: SMITH implements a federated approach both for its governance structure and for its information system architecture. SMITH has designed a generic concept for its data integration centers. They share identical services and functionalities to take best advantage of the interoperability architectures and of the data use and access process planned. The DICs provide access to the local hospitals’ Electronic Medical Records (EMR). This is based on data trustee and privacy management services. DIC staff will curate and amend EMR data in the Health Data Storage. Methodology and Architectural Framework: To share medical and research data, SMITH’s information system is based on communication and storage standards. We use the Reference Model of the Open Archival Information System and will consistently implement profiles of Integrating the Health Care Enterprise (IHE) and Health Level Seven (HL7) standards. Standard terminologies will be applied. The SMITH Market Place will be used for devising agreements on data access and distribution. 3LGM2 for enterprise architecture modeling supports a consistent development process. The DIC reference architecture determines the services, applications and the standards-based communication links needed for efficiently supporting the ingesting, data nourishing, trustee, privacy management and data transfer tasks of the SMITH DICs. The reference architecture is adopted at the local sites. Data sharing services and the market place enable interoperability. Use Cases: The methodological use case “Phenotype Pipeline” (PheP) constructs algorithms for annotations and analyses of patient-related phenotypes according to classification rules or statistical models based on structured data. Unstructured textual data will be subject to natural language processing to permit integration into the phenotyping algorithms. The clinical use case “Algorithmic Surveillance of ICU Patients” (ASIC) focuses on patients in Intensive Care Units (ICU) with the acute respiratory distress syndrome (ARDS). A model-based decision-support system will give advice for mechanical ventilation. The clinical use case HELP develops a hospital-wide, electronic medical record-based computerized decision support system to improve outcomes of patients with bloodstream infections. ASIC and HELP use the PheP. The clinical benefit of the use cases ASIC and HELP will be demonstrated in a change of care clinical trial based on a stepped-wedge design. Discussion: SMITH’s strength is the modular, reusable IT architecture based on interoperability standards, the integration of the hospitals’ information management departments and the public-private partnership. The project aims at sustainability beyond the first 4-year funding period.
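
To make the Phenotype Pipeline concrete, the following is a minimal sketch, not SMITH's implementation, of the kind of classification rule PheP might apply to structured EMR data. The record fields, the "PF_RATIO" code and the cut-off are assumptions chosen for illustration only.

```python
# Minimal sketch, not SMITH's implementation: a rule-based phenotype annotation
# over structured EMR-like records, as the Phenotype Pipeline (PheP) might apply.
# Field names, the "PF_RATIO" code and the cut-off are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Observation:
    patient_id: str
    code: str       # e.g. a LOINC or local laboratory code
    value: float

def annotate_ards_risk(observations: list[Observation]) -> dict[str, bool]:
    """Flag patients whose PaO2/FiO2 ratio falls below an assumed threshold."""
    flags: dict[str, bool] = {}
    for obs in observations:
        if obs.code == "PF_RATIO":          # hypothetical code for the PaO2/FiO2 ratio
            flags[obs.patient_id] = obs.value < 300.0
    return flags

print(annotate_ards_risk([Observation("p1", "PF_RATIO", 250.0),
                          Observation("p2", "PF_RATIO", 420.0)]))
```

In SMITH, such rules would sit alongside statistical models and natural language processing over unstructured text, as described above.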


Author(s):  
Priya Yadav ◽  
Pranjeet Das ◽  
Ravi Kumar Malhotra

E-commerce is the process of doing business through computer networks. Advances in wireless network technology and the continuously increasing number of mobile users provide an ideal platform for offering various high-utility services to mobile users at the snap of a finger, and they give pace to the rapid development of e-commerce in India. E-commerce is considered an excellent alternative for companies to reach new customers, but the factor that has hindered its growth is security. Security is the main challenge facing e-commerce today, and much advancement is still being made in this field. To increase the use of e-commerce in developing countries, B2B e-commerce is implemented to improve access to global markets for firms in those countries. Combined with the special characteristics and constraints of mobile terminals and wireless networks, and the contexts, situations and circumstances in which people use their hand-held terminals, this will ultimately fuel explosive e-commerce growth in India. This paper highlights the key challenges and opportunities that the Indian e-commerce industry may face in the upcoming years, and it also discusses challenges in electronic commerce transactions.


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 592
Author(s):  
Radek Silhavy ◽  
Petr Silhavy ◽  
Zdenka Prokopova

Software size estimation is a complex, nontrivial task based on data analysis or on an algorithmic estimation approach, and it is important for software project planning and management. In this paper, a new method called Actors and Use Cases Size Estimation is proposed. The method relies on the number of actors and use cases only and is based on stepwise regression; it led to a very significant reduction in errors when estimating the size of software systems compared to Use Case Points-based methods. The proposed method is independent of Use Case Points, which eliminates the effect of inaccurately determined Use Case Points components, because such components are not used in the proposed method.
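
As a rough illustration of the idea of estimating size from actor and use-case counts alone, here is a minimal sketch assuming an ordinary least-squares formulation; the paper's actual stepwise regression model and coefficients are not reproduced here, and the project data below are invented.

```python
# Minimal sketch (not the authors' implementation): estimating software size
# from the number of actors and use cases with a simple linear regression.
# The historical project data below are made up for illustration only.
import numpy as np

# Hypothetical projects: [actors, use_cases] -> delivered size (arbitrary units)
X = np.array([[3, 12], [5, 20], [2, 8], [7, 31], [4, 15], [6, 25]], dtype=float)
y = np.array([140.0, 255.0, 95.0, 390.0, 180.0, 310.0])

# Fit y ~ b0 + b1*actors + b2*use_cases via ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2 = coef

def estimate_size(actors: int, use_cases: int) -> float:
    """Predict size for a new project from actor and use-case counts."""
    return b0 + b1 * actors + b2 * use_cases

print(estimate_size(5, 18))
```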


2014 ◽  
Vol 23 (01) ◽  
pp. 27-35 ◽  
Author(s):  
S. de Lusignan ◽  
S-T. Liaw ◽  
C. Kuziemsky ◽  
F. Mold ◽  
P. Krause ◽  
...  

Summary Background: Generally, the benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, some rarer and more long-term events require new methods. Big data generated by increasingly affordable personalised computing and by pervasive computing devices is growing rapidly, and low-cost, high-volume cloud computing makes the processing of these data inexpensive. Objective: To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. Method: We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. Results: We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowd-sourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the “internet of things”; and (iii) real-time monitoring for the direct monitoring of epidemics as well as vaccine effects via social media and other data sources. Conclusions: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance.


Author(s):  
Emrah Inan ◽  
Burak Yonyul ◽  
Fatih Tekbacak

Most of the data on the web is unstructured, and it needs to be transformed into a machine-operable structure. It is therefore appropriate to convert unstructured data into a structured form according to the requirements and to store those data in different data models chosen with the use cases in mind. As requirements and their types increase, a single approach fails to serve them all, so it is not suitable to use a single storage technology to satisfy every storage requirement. Managing stores with various types of schemas in a joint and integrated manner is referred to as a 'multistore' or 'polystore' in the database literature. In this paper, the Entity Linking task is leveraged to transform texts into well-formed data, and these data are managed in an integrated environment of different data models. Finally, this integrated big data environment is queried and examined to demonstrate the method.
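
A minimal sketch of the general workflow described above, not the paper's system: a stub entity linker turns free text into structured records, which are then routed to two in-memory stand-ins for different data models (a document-style store and a graph-style store), mimicking a polystore arrangement. The linker, the sample entities and both stores are assumptions for illustration.

```python
# Minimal polystore-style sketch: linked entities are stored both as documents
# and as a co-occurrence graph. All components are in-memory stand-ins.
from collections import defaultdict

def link_entities(text: str) -> list[dict]:
    """Stand-in for an entity linker: returns (mention, linked entity) records."""
    known = {"Berlin": "http://dbpedia.org/resource/Berlin",
             "Einstein": "http://dbpedia.org/resource/Albert_Einstein"}
    return [{"mention": m, "entity": uri, "text": text}
            for m, uri in known.items() if m in text]

document_store: list[dict] = []                 # document model: one record per mention
graph_store: defaultdict = defaultdict(set)     # graph model: entity -> co-occurring entities

def ingest(text: str) -> None:
    records = link_entities(text)
    document_store.extend(records)
    uris = [r["entity"] for r in records]
    for a in uris:
        graph_store[a].update(u for u in uris if u != a)

ingest("Einstein lived in Berlin for many years.")
print(document_store)
print(dict(graph_store))
```

Querying such an environment then means choosing the store whose model best fits the question, e.g. the graph for relationship queries and the documents for full-text lookups.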


Author(s):  
Matt Woodburn ◽  
Gabriele Droege ◽  
Sharon Grant ◽  
Quentin Groom ◽  
Janeen Jones ◽  
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long-term goal, however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To sustainably achieve this, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns depending on the purpose of a particular implementation. The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centered around the ‘ObjectGroup’ (Fig. 1), a class that may represent any group (of any size) of physical collection objects, which have one or more common characteristics. This generic definition of the ‘collection’ in ‘collection descriptions’ is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard are likely to be relevant. In some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model as part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models. So, in summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that may not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and define any necessary constraints within the wider scope of the standard and model. This is the concept of the ‘collection description scheme,’ a profile that defines elements such as: which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked, and how the various classes should be related to each other. Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard and discuss the process of defining new collection description schemes using the standard and data model, and focus on DiSSCo requirements as examples of real-world collection descriptions use cases.
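
As a rough illustration only, the following sketch shows how an ObjectGroup with normalised MeasurementOrFact-style metrics might be represented under one particular collection description scheme; the property names and values are illustrative and not the ratified TDWG CD terms.

```python
# Minimal sketch, not the normative TDWG CD model: one possible in-memory
# representation of an ObjectGroup with MeasurementOrFact-style metrics.
from dataclasses import dataclass, field

@dataclass
class MeasurementOrFact:
    measurement_type: str    # e.g. "objectCount" or "percentDigitised" (assumed names)
    value: float
    unit: str = ""

@dataclass
class ObjectGroup:
    name: str
    preservation_method: str
    taxonomic_scope: str
    metrics: list[MeasurementOrFact] = field(default_factory=list)

entomology_pinned = ObjectGroup(
    name="Pinned insects",
    preservation_method="dried and pinned",
    taxonomic_scope="Insecta",
    metrics=[MeasurementOrFact("objectCount", 1_200_000),
             MeasurementOrFact("percentDigitised", 14.5, "%")],
)
print(entomology_pinned)
```

A concrete scheme would fix which of these properties are mandatory or repeatable and which controlled vocabularies constrain their values, as described above.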


2019 ◽  
Vol 8 (2S11) ◽  
pp. 3606-3611

Big data privacy has assumed importance as cloud computing became a phenomenal success in providing a remote platform for sharing computing resources without geographical and time restrictions. However, privacy concerns about big data outsourced to public cloud storage still exist. Different anonymity and sanitization techniques have come into existence for protecting big data from privacy attacks. In our prior work, we proposed a misusability-probability-based metric to estimate the probable percentage of misusability. We additionally designed a framework, based on misusability probability, that suggests a level of sanitization before privacy protection is actually applied to big data. In this paper, our focus is on further evaluation of our misuse-probability-based sanitization of big data by defining an algorithm which analyses the trade-offs between misuse probability and level of sanitization. The paper throws light on the proposed framework and misusability measure, in addition to evaluating the framework with an empirical study. The empirical study was conducted in a public cloud environment with Amazon EC2 (compute engine), S3 (storage service) and EMR (MapReduce framework). The experimental results revealed the dynamics of the trade-offs between misuse probability and sanitization level, and the insights help in making well-informed decisions while sanitizing big data, ensuring that it is protected without losing the utility required.
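
A minimal sketch of the kind of trade-off scan described above, not the authors' algorithm: candidate sanitization levels are compared against an assumed misuse-probability curve and the utility that remains, and the lowest level meeting a privacy target is selected. Both curves are invented placeholders for the paper's measures.

```python
# Minimal sketch: scan sanitization levels, balancing an assumed misuse-probability
# curve against remaining utility. The curves are placeholders, not the paper's metrics.

def misuse_probability(level: float) -> float:
    """Assumed: misuse probability drops as more sanitization is applied."""
    return max(0.0, 0.9 - 0.8 * level)

def residual_utility(level: float) -> float:
    """Assumed: analytical utility of the data degrades with sanitization."""
    return 1.0 - 0.6 * level

def choose_level(max_misuse: float, steps: int = 10) -> tuple[float, float, float]:
    """Return the lowest sanitization level whose misuse probability meets the target."""
    for i in range(steps + 1):
        level = i / steps
        p = misuse_probability(level)
        if p <= max_misuse:
            return level, p, residual_utility(level)
    return 1.0, misuse_probability(1.0), residual_utility(1.0)

print(choose_level(max_misuse=0.2))   # -> (level, misuse probability, remaining utility)
```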


2021 ◽  
pp. 073168442110552
Author(s):  
Xiao Xue ◽  
Shu-Yan Liu ◽  
Zhao-Yang Zhang ◽  
Qing-Zhou Wang ◽  
Cheng-Zhi Xiao

The rapidly rising demand for fiber-reinforced plastics (FRPs) has led to large volumes of manufacturing and end-of-life waste. Recycling fiber-reinforced thermosets is very difficult owing to their complex structure and heterogeneity. Landfill and incineration have become the most commonly used methods for eliminating non-degradable FRP waste, which adversely affects the environment and ecology. The purpose of this review is to evaluate end-of-life FRP recycling technologies in terms of optimizing the reuse/recycling of resources and eliminating waste, thereby improving FRP waste management. The technical progress made in the recycling of thermosetting composites is reviewed, including mechanical, thermal (pyrolysis and fluidized-bed), and chemical (critical fluid and low-temperature solvent) methods. The technical feasibility of each method is compared, and the economic and environmental impacts are considered. The challenges and opportunities facing the establishment of a composite recycling market in the future are examined. Finally, we provide a comprehensive summary of the scope of each recycling method.

