Diversification of Legislation Editing Open Software (LEOS) Using Software Agents—Transforming Parliamentary Control of the Hellenic Parliament into Big Open Legal Data

2021 ◽  
Vol 5 (3) ◽  
pp. 45
Author(s):  
Sotiris Leventis ◽  
Fotios Fitsilis ◽  
Vasileios Anastasiou

The accessibility and reuse of legal data is paramount for promoting transparency, accountability and, ultimately, trust towards governance institutions. The aggregation of structured and semi-structured legal data inevitably leads to the big data realm and a series of challenges for the generation, handling, and analysis of large datasets. When it comes to data generation, LEOS represents a legal informatics tool that is maturing quickly. Now in its third release, it effectively supports the drafting of legal documents using Akoma Ntoso-compatible schemas. However, the tool, originally developed for cooperative legislative drafting, can be repurposed to draft parliamentary control documents. This is achieved through the use of actor-oriented software components, referred to as software agents, which enable system interoperability by interlinking the text editing system with parliamentary control datasets. A validated corpus of written questions from the Hellenic Parliament is used to evaluate the feasibility of the endeavour: using LEOS as an authoring tool for written parliamentary questions and for generating standardised, open legislative data. Systemic integration not only proves the tool’s versatility, but also opens new ground in interoperability between formerly unrelated legal systems and data sources.
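The drafting output described above targets Akoma Ntoso-compatible XML. A minimal sketch of what a generated written-question document could look like is shown below, using simplified element names in the general shape of Akoma Ntoso rather than the full schema; the identifiers and helper function are illustrative assumptions, not part of LEOS:

```python
import xml.etree.ElementTree as ET

def build_written_question(question_id: str, author: str, text: str) -> str:
    """Build a minimal Akoma Ntoso-style XML document for a written question.

    The element names follow the general shape of Akoma Ntoso (akomaNtoso
    root, doc wrapper, meta and body sections) but are a simplified
    illustration, not the full schema.
    """
    root = ET.Element("akomaNtoso")
    doc = ET.SubElement(root, "doc", {"name": "writtenQuestion"})
    meta = ET.SubElement(doc, "meta")
    ET.SubElement(meta, "identification", {"source": author}).text = question_id
    body = ET.SubElement(doc, "mainBody")
    ET.SubElement(body, "p").text = text
    return ET.tostring(root, encoding="unicode")

xml_doc = build_written_question("wq-2021-0045", "#hellenicParliament",
                                 "What measures support open legal data?")
print(xml_doc)
```

Emitting a standard XML vocabulary of this kind is what lets the editing system and the parliamentary control datasets interoperate, since both sides agree on the document structure rather than on any one tool.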

Author(s):  
Pethuru Raj

The implications of the digitization process, among a bevy of trends, are many and memorable. One is the enormous growth in data generation, gathering, and storage due to a steady increase in the number of data sources and in their structures, scopes, sizes, and speeds. In this chapter, the author shows some of the impactful developments brewing in the IT space: how the tremendous amount of data being produced and processed all over the world impacts the IT and business domains; how next-generation IT infrastructures are accordingly being refactored, remedied, and readied for the impending big data-induced challenges; how the big data analytics discipline is moving towards fulfilling the digital universe's requirement of extracting and extrapolating actionable insights for the knowledge-parched; and, finally, how the envisioned smarter planet can be established and sustained.


2022 ◽  
pp. 104-124
Author(s):  
Hugo Garcia Tonioli Defendi ◽  
Vanessa de Arruda Jorge ◽  
Ana Paula da Silva Carvalho ◽  
Luciana da Silva Madeira ◽  
Suzana Borschiver

The process of knowledge construction, widely discussed in the literature, follows a common structure that encompasses the transformation of data into information and then into knowledge, converging social, technological, organizational, and strategic aspects. The advancement of information technologies and growing global research efforts in the health field have dynamically generated large datasets, providing potentially innovative solutions to health problems while posing important challenges in selecting and interpreting useful information and possibilities. The COVID-19 pandemic has intensified this data generation, as global efforts and cooperation have promoted a level of scientific production, aimed at overcoming the pandemic, never experienced before. In this context, the search for an effective and safe vaccine that can prevent the spread of the virus has become a common goal of societies, governments, institutions, and companies. These collaborative efforts have helped speed up the development of vaccines at a pace unprecedented in history.


2017 ◽  
Vol 27 (09n10) ◽  
pp. 1579-1589 ◽  
Author(s):  
Reinier Morejón ◽  
Marx Viana ◽  
Carlos Lucena

Data mining is a hot topic that attracts researchers from different areas, such as databases, machine learning, and agent-oriented software engineering. As data volumes grow, there is an increasing need to obtain knowledge from large datasets that are difficult to handle and process with traditional methods. Software agents can play a significant role in performing data mining processes more efficiently. For instance, they can perform selection, extraction, preprocessing, and integration of data, as well as parallel, distributed, or multisource mining. This paper proposes a framework based on multiagent systems for applying data mining techniques to health datasets. Finally, as usage scenarios, we apply the framework to hypothyroidism and diabetes datasets, running two different mining processes in parallel on each database.
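The agent-based parallel mining described above can be illustrated with a minimal sketch: one lightweight "agent" per dataset, each performing a selection step followed by a mining step, with the agents executed in parallel. The toy records and the summary statistic are illustrative assumptions, not the paper's actual framework or datasets:

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

# Toy health records: (patient_id, lab_value, diagnosed)
hypothyroid_data = [(1, 9.8, True), (2, 2.1, False), (3, 11.4, True)]
diabetes_data = [(1, 180, True), (2, 95, False), (3, 210, True), (4, 100, False)]

def mining_agent(name, records):
    """A minimal 'agent': select diagnosed cases, then mine a summary statistic."""
    selected = [value for _, value, diagnosed in records if diagnosed]  # selection step
    return name, len(selected), mean(selected)                          # mining step

# Run one agent per dataset in parallel, mirroring one mining process per database.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda args: mining_agent(*args),
                            [("hypothyroid", hypothyroid_data),
                             ("diabetes", diabetes_data)]))
print(results)
```

A real multiagent framework would add agent communication, distributed deployment, and pluggable mining algorithms; the sketch only shows the division of labour that makes parallel, per-dataset mining natural.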


Author(s):  
Ronald P. Reck ◽  
Kenneth B. Sall ◽  
Wendy A. Swanbeck

As music is a topic of interest to many, it is no surprise that developers have applied web and semantic technology to provide various RDF datasets describing relationships among musical artists, albums, songs, genres, and more. As avid fans of blues and rock music, we wondered whether we could construct SPARQL queries to examine properties and relationships between performers in order to answer global questions such as "Who has had the greatest impact on rock music?" Our primary focus was Eric Clapton, a musical artist with a decades-spanning career who has enjoyed a very successful solo career as well as having performed in several world-renowned bands. The application of semantic technology to a public dataset can provide useful insights into how similar approaches can be applied to realistic domain problems, such as finding relationships between persons of interest. A clear understanding of the semantics of the RDF properties available in a dataset is of course crucial, but achieving it is a substantial challenge, especially when leveraging information from similar yet different data sources. This paper explores the use of the DBpedia and MusicBrainz data sources using OpenLink Virtuoso Universal Server with a Drupal frontend. Much attention is given to the challenges we encountered, especially with respect to relatively large datasets of community-entered open data of varying quality, and to the strategies we employed or recommend to overcome those challenges.
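A query of the kind described above can be sketched as follows. dbo:associatedMusicalArtist is a real DBpedia ontology property, but the exact query shape, the helper function, and the LIMIT are illustrative assumptions rather than the authors' actual queries; sending the query to the public endpoint (e.g. with SPARQLWrapper) is omitted here:

```python
def associated_artists_query(artist_uri: str) -> str:
    """Build a SPARQL query listing artists linked to the given performer
    via DBpedia's dbo:associatedMusicalArtist property.

    This is a sketch: a live system would submit the returned string to
    a SPARQL endpoint such as https://dbpedia.org/sparql.
    """
    return f"""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT DISTINCT ?other WHERE {{
        <{artist_uri}> dbo:associatedMusicalArtist ?other .
    }}
    LIMIT 50
    """

query = associated_artists_query("http://dbpedia.org/resource/Eric_Clapton")
print(query)
```

Chaining such one-hop queries (or using SPARQL property paths) is what allows the band-membership and collaboration links around a performer like Eric Clapton to be traversed, though community-entered data means the same relationship may be modelled with different properties in different records.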


2011 ◽  
Vol 105 (4) ◽  
pp. 702-716 ◽  
Author(s):  
DAVID SKARBEK

How can people who lack access to effective government institutions establish property rights and facilitate exchange? The illegal narcotics trade in Los Angeles has flourished despite its inability to rely on state-based formal institutions of governance. An alternative system of governance has emerged from an unexpected source—behind bars. The Mexican Mafia prison gang can extort drug dealers on the street because they wield substantial control over inmates in the county jail system and because drug dealers anticipate future incarceration. The gang's ability to extract resources creates incentives for them to provide governance institutions that mitigate market failures among Hispanic drug-dealing street gangs, including enforcing deals, protecting property rights, and adjudicating disputes. Evidence collected from federal indictments and other legal documents related to the Mexican Mafia prison gang and numerous street gangs supports this claim.


Author(s):  
Mario Cannataro

Bioinformatics involves the design and development of advanced algorithms and computational platforms to solve problems in biomedicine (Jones & Pevzner, 2004). It also deals with methods for acquiring, storing, retrieving, and analysing biological data obtained by querying biological databases or produced by experiments. Bioinformatics applications involve different datasets as well as different software tools and algorithms. Such applications need semantic models for basic software components and advanced scientific portal services able to aggregate such different components and to hide their details and complexity from the final user. For instance, proteomics applications involve datasets, either produced by experiments or available as public databases, as well as a huge number of different software tools and algorithms. Using such applications requires knowledge of both the biological issues related to data generation and result interpretation and the informatics requirements related to data analysis.


Big Data ◽  
2016 ◽  
pp. 757-777
Author(s):  
Pethuru Raj

The implications of the digitization process, among a bevy of trends, are many and memorable. One is the enormous growth in data generation, gathering, and storage due to a steady increase in the number of data sources and in their structures, scopes, sizes, and speeds. In this chapter, the author shows some of the impactful developments brewing in the IT space: how the tremendous amount of data being produced and processed all over the world impacts the IT and business domains; how next-generation IT infrastructures are accordingly being refactored, remedied, and readied for the impending big data-induced challenges; how the big data analytics discipline is moving towards fulfilling the digital universe's requirement of extracting and extrapolating actionable insights for the knowledge-parched; and, finally, how the envisioned smarter planet can be established and sustained.


2021 ◽  
Vol 12 (1) ◽  
pp. 1-18
Author(s):  
Sagar Samtani ◽  
Murat Kantarcioglu ◽  
Hsinchun Chen

Events such as the Facebook-Cambridge Analytica scandal and data aggregation efforts by technology providers have illustrated how vulnerable modern society is to privacy violations. Internationally recognized entities such as the National Science Foundation (NSF) have indicated that Artificial Intelligence (AI)-enabled models, artifacts, and systems can efficiently and effectively sift through large quantities of data from legal documents, social media, Dark Web sites, and other sources to curb privacy violations. Yet considerable effort is still required to understand prevailing data sources, systematically develop AI-enabled privacy analytics that tackle emerging challenges, and deploy systems that address critical privacy needs. To this end, we provide an overview of prevailing data sources that can support AI-enabled privacy analytics; a multi-disciplinary research framework that connects data, algorithms, and systems to tackle emerging challenges such as entity resolution, privacy assistance systems, and privacy risk modeling; a summary of selected funding sources to support high-impact privacy analytics research; and an overview of prevailing conference and journal venues for sharing and archiving privacy analytics research. We conclude this paper with an introduction to the papers included in this special issue.
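Entity resolution, one of the challenges named above, can be illustrated with a deliberately crude sketch: normalise two name strings and compare their similarity. The names, the threshold, and the use of plain string similarity are illustrative assumptions; real privacy-analytics pipelines would add blocking, multiple attributes, and learned matchers:

```python
from difflib import SequenceMatcher

def same_entity(name_a: str, name_b: str, threshold: float = 0.85) -> bool:
    """Crude pairwise entity-resolution check: normalise the two names,
    then decide 'same person' when string similarity clears a threshold."""
    a, b = name_a.lower().strip(), name_b.lower().strip()
    return SequenceMatcher(None, a, b).ratio() >= threshold

# Near-duplicate spellings of one person vs. two clearly different people.
match_close = same_entity("Jon A. Smith", "John A. Smith")
match_far = same_entity("Jon A. Smith", "Mary K. Jones")
print(match_close, match_far)
```

Even this toy version shows why entity resolution matters for privacy analytics: records about one person scattered across legal documents, social media, and Dark Web sources rarely share an exact identifier, so linking (or deliberately not linking) them is itself a privacy-sensitive decision.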


2015 ◽  
pp. 187-221
Author(s):  
Pethuru Raj

The implications of the digitization process, among a bevy of trends, are many and memorable. One is the enormous growth in data generation, gathering, and storage due to a steady increase in the number of data sources and in their structures, scopes, sizes, and speeds. In this chapter, the author shows some of the impactful developments brewing in the IT space: how the tremendous amount of data being produced and processed all over the world impacts the IT and business domains; how next-generation IT infrastructures are accordingly being refactored, remedied, and readied for the impending big data-induced challenges; how the big data analytics discipline is moving towards fulfilling the digital universe's requirement of extracting and extrapolating actionable insights for the knowledge-parched; and, finally, how the envisioned smarter planet can be established and sustained.

